Is Video Data Big Data?

 

Overview

The way data is collected and used is constantly transforming before our eyes. The creation of the internet, low storage costs, and the rise of social media have drastically expanded what data can do and have created an entirely new industry. Corporations can leverage this data to make information-driven decisions. When considering data, the first thing that may come to mind is a well-formatted spreadsheet. However, newer data harvesting methods often deal with unstructured data. Unstructured data is data that does not follow a predefined format; however, it can still be extremely useful and valuable. For example, text from social media and data derived from videos are used to draw inferences about consumers that more traditional data cannot support. For more information on unstructured data, please visit this article.

What is Video Analytics?

Video Analytics combines multiple data analytic techniques. Facial recognition software can identify people in videos. Unstructured text data can be drawn from videos that contain text. Sentiment analysis can be conducted on users' comments and reactions to videos. Many more insights can be derived from videos that are not available from other data mediums. Unstructured video data is created daily, and with the rise of social media and the internet, video sharing will only increase. Sites like YouTube and Netflix have brought video streaming to millions of users; almost every internet user streams some sort of video media daily. This is beneficial for corporations and data analysts, as video is a medium that can reveal far more about consumers than other types of data. According to some estimates by IBM, 80% of internet traffic will be caused by video streaming by 2019. This is a huge industry and an excellent opportunity for corporations to analyze.

In my opinion, sentiment analysis conducted on video comments is one of the most useful forms of data that can be derived from videos. For example, suppose a company posts a new advertisement online. If thousands of consumers comment on the video, machine learning can be used to determine the overall feeling about the advertisement and, therefore, the product. If most of the comments are positive, the corporation can conclude that the advertisement worked and the product is being received well. However, if most of the comments are negative, it can see that the advertisement has problems or that changes need to be made to the product.
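As an illustration, here is a minimal sketch of how such comment analysis might look in Python, assuming the NLTK library and its VADER sentiment lexicon are available; the comments themselves are made-up examples rather than real data pulled from a video platform.

```python
# A minimal sketch of comment sentiment analysis using NLTK's VADER analyzer.
# The comment strings below are invented examples; a real pipeline would pull
# them from a video platform's API instead.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

comments = [
    "Loved this ad, the product looks amazing!",
    "Not impressed, the old version was better.",
    "Where can I buy one?",
]

sia = SentimentIntensityAnalyzer()
scores = [sia.polarity_scores(c)["compound"] for c in comments]

# Average compound score in [-1, 1]: positive values suggest the ad landed well.
print(sum(scores) / len(scores))
```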

There are also uses of video analytics that are helpful for security purposes. Police and other government agencies can now use facial recognition software to catch criminals. For example, if a security camera were to capture the face of a terrorist suspect, the police could obtain this video. By using facial recognition software in conjunction with publicly available social media data and images, they could run the face and perhaps get a match. This would allow them to identify the suspect based on their social media profiles.

Issues

Although there are many benefits to video analytics, there are also some implications that need to be considered. The most important issue is where to store the data that is collected from video analytics. The massive amount of data produced daily from videos needs to be stored somewhere, and storing and accessing it can be a costly and time-consuming process. Streaming large video files can be a huge obstacle that smaller businesses need to solve. As cameras and digital media improve, video data is becoming higher quality, and storing these high-quality videos takes up a large amount of space and is expensive. A process known as transcoding looks to ease this storage strain. Transcoding is the process of converting a digital media file from one format or encoding to another. This often makes the file much smaller and therefore easier to handle and cheaper to store. It can also make data compatible with a pre-existing storage system. Although there can be a loss of quality in the data, it is still accessible and can still be used for analytical purposes. For more information on transcoding, please visit this article.

By changing the format of a movie file, much less space is needed when storing its data in a database. There are many applications available for transcoding videos. For example, transcoding can easily be completed using Cloudinary, a service that allows its users to adjust the quality, bitrate, and codec of videos. This means that the stream will stay smooth regardless of the device or bandwidth.
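To make the idea concrete, here is a rough sketch of a transcoding step using the open-source ffmpeg command-line tool (rather than Cloudinary's own service), assuming ffmpeg is installed on the machine; the file names are placeholders.

```python
# A rough sketch of transcoding with the open-source ffmpeg CLI: re-encode a
# source video to H.264 at reduced quality so it takes up less space in storage.
# File names are placeholders, and ffmpeg must be installed separately.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "raw_footage.mov",   # original high-quality capture
        "-c:v", "libx264",          # convert the video stream to H.264
        "-crf", "28",               # higher CRF = smaller file, lower quality
        "-c:a", "aac",              # re-encode audio to AAC
        "archive_copy.mp4",
    ],
    check=True,
)
```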

Wrapping Up

I believe that corporations will only continue to develop video analytics. The data collected from videos offers immense opportunities to analysts, especially considering the impact that social media has on modern culture. Because most social media sites are free to use, sites such as Instagram, Facebook, and Snapchat rely on ads for revenue, and many of these ads are in video format. By analyzing people's likes and interests, certain ads can be made more appealing to them and better at attracting their attention. To help identify these users, sentiment analysis could be conducted on their profiles and comments to find their interests. As cameras and videos become more advanced, this industry will only grow, and the technology needed to analyze video data needs to grow with it.

In conclusion, I believe that video data should be considered big data. It is increasing daily and is projected to soon account for most of the internet's traffic. It would be foolish for corporations to ignore the activity that makes up roughly 80% of internet traffic.

Bibliography

“Unstructured Data.” Wikipedia, Wikimedia Foundation, 3 July 2018, en.wikipedia.org.

“Transcoding.” Wikipedia, Wikimedia Foundation, 5 July 2018, en.wikipedia.org.

Posey, Brian. “Streaming Large Video Files in Today’s Big Data Environments.” SearchStorage, TechTarget, searchstorage.techtarget.com.

Woodie, Alex. “Analyzing Video, the Biggest Data of Them All.” Datanami, 26 May 2016, www.datanami.com.

What is Data Tiering?

Overview

The location where data is stored depends on the defining characteristics of the data. The most common determinant is how often the data is accessed. For example, data that is frequently accessed would most likely be stored in a more expensive but higher-performance location, while data that is infrequently accessed would be stored in a high-capacity but low-cost solution, such as cloud storage. Automated Data Tiering is the process of determining where data should be located based on how often it is used (more details).

Caching is a form of data storage that is similar to data tiering but slightly different. Tiered data places data in a tier for the long term based on its usage. Cached data, by contrast, makes use of multiple tiers at once: when data is being accessed, it is temporarily copied to high-performance storage close to the processor, and when it is not being accessed, it remains on a lower-performance tier. It is a dynamic system that can switch between multiple tiers. Video games often rely on this type of data storage.
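A toy sketch of that idea is shown below: a small in-memory dictionary acts as the fast tier in front of a slower "cold" store, and items are copied into it only while they are being accessed. The class and store names are invented for the example.

```python
# A toy illustration of caching on top of tiered storage: keep a small, fast
# in-memory dictionary in front of a slower "cold" store, and copy items into
# the fast layer only when they are accessed.
class TwoTierStore:
    def __init__(self, cold_store, cache_size=100):
        self.cold = cold_store          # slow, cheap tier (e.g., disk or cloud)
        self.cache = {}                 # fast tier held in memory
        self.cache_size = cache_size

    def get(self, key):
        if key in self.cache:           # cache hit: served from the fast tier
            return self.cache[key]
        value = self.cold[key]          # cache miss: fetch from the slow tier
        if len(self.cache) >= self.cache_size:
            self.cache.pop(next(iter(self.cache)))  # evict the oldest entry
        self.cache[key] = value         # temporarily promote to the fast tier
        return value


store = TwoTierStore({"frame_001": b"...video bytes..."})
print(store.get("frame_001"))
```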

Automated Tiering

Monitoring and assigning storage locations for data by hand would be an extremely time-consuming and strenuous activity. Automated Tiering saves time for employees who deal with data; by automatically placing data based on usage, it frees up an employee's time for other activities. In addition to saving time, Automated Tiering can also save a business money. Storing infrequently accessed data in expensive, high-performance locations would be an unnecessary expense, and it is much more cost-effective to store this data in cheaper, lower-performance locations.

This emerging market has created opportunities for multiple Automated Tiering applications and services. Some vendors are capable of utilizing cloud data storage in their Automated Tiering. NetApp is a vendor that makes use of AWS in its data tiering storage method; its system is called ONTAP Cloud. ONTAP Cloud stores frequently used data in Amazon EBS while less frequently used data is stored in Amazon S3. Because the tiering is automatic, ONTAP Cloud saves its users both time and money. Another vendor that offers a similar service is Hitachi Vantara. Hitachi claims that "65% of storage system capacity is used to store nonprimary inactive data" and that its automated data tiering system can save users over 70% in storage costs.
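As a concrete illustration of an automated tiering policy (not ONTAP Cloud itself), the sketch below uses boto3 to attach an S3 lifecycle rule that moves objects to cheaper storage classes as they age; the bucket name and transition windows are placeholder assumptions.

```python
# A hedged sketch of an automated tiering policy using an S3 lifecycle rule via
# boto3. This illustrates the general "tier down cold data" idea, not NetApp's
# ONTAP Cloud. The bucket name is a placeholder and the day counts are arbitrary.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-cold-data",
                "Filter": {"Prefix": ""},          # apply to every object
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent-access tier
                    {"Days": 90, "StorageClass": "GLACIER"},      # archival tier
                ],
            }
        ]
    },
)
```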

Pros and Cons of Data Tiering

Although it is effective, this type of data storage is still a developing technology, and as with any developing technology, there are drawbacks to consider along with the benefits. The primary advantage of this system is the money saved on data storage costs. Storing data in high-performance areas is expensive and will add up to a tremendous bill if left unchecked; by keeping only frequently accessed data in these high-performance areas, a company can save a great deal of money. However, the lower-cost solutions are not as effective. When querying data from a lower-cost storage location, query times will be much longer. It is up to a company to decide whether the cost savings are worth the drop in performance for infrequently accessed data.

Another advantage of Data Tiering is the reduction in time and labor. Rather than paying an employee to manually monitor data usage and assign a storage location, an application does it automatically. An application can also process these decisions faster and more accurately than a human. Humans should be doing work that requires abstract thinking a computer is incapable of, so companies should look to automate monotonous tasks such as this.

Best Practices for Tiered Storage

As mentioned earlier, it is important for a company to determine whether they are willing to sacrifice performance for infrequently used data in order to save costs. There are also other implications (such as data lifecycles and backing up data) that need to be considered.

The first best practice regarding Data Tiering is to consult with the data users. Talk to your employees to see what their opinion of the data is. For example, one employee may crucially need a certain set of data to be high-performance; if he or she is the only employee in the company accessing it, the raw access frequency may be misleading. His or her job could become much more time-consuming if the data they rely on is put in a low-cost storage location. Another topic to discuss with employees is how data is classified; users who create data should be able to manually classify it themselves.

Another best practice of Tiered Storage is to determine the appropriate lifecycle of data. Different functions of a business use data over different time periods. For example, the Marketing and Sales teams may use data to make quick business decisions, after which the data becomes redundant to them. However, the Finance and Accounting teams might still use that same data for a much longer period to generate reports and projections and to keep the books. The lifecycle of the data needs to cater to all of the teams' needs. Please visit this article for more information on best practices.

Conclusion

Even though there are many benefits to Data Tiering (especially Automated Data Tiering), the reduction in performance for less frequently used data still needs to be considered. A company needs to determine whether the money saved will be worth the reduction in performance for some data. Many applications and vendors have recently arisen that can meet companies' needs in automated cloud data tiering. In my opinion, the benefits of Automated Tiering outweigh the cons, as long as proper planning and implementation practices are followed. I am eager to see how this industry will advance.

 

Bibliography

“What Is Automated Data Tiering?” SearchStorage, TechTarget, searchstorage.techtarget.com.

“What Is Tiered Storage?” SearchStorage, TechTarget, searchstorage.techtarget.com.

“An Introduction to Data Tiering.” Cloudian, cloudian.com.

“Tiered Storage Strategies and Best Practices.” ComputerWeekly.com, www.computerweekly.com.

Kovacs, Gali. “Storage Tiering with NetApp ONTAP Cloud and Amazon S3.” NetApp Cloud Central, cloud.netapp.com.

“Data Tiering – Automated Cloud Migration.” Hitachi Vantara, Hitachi Vantara Corporation, www.hitachivantara.com.


Data Warehousing: Definition, Architecture, and Concepts

Billions of Smart Devices are constantly creating data, and a business that uses this data responsibly can gain many advantages from Data Analytics. The invention of Cloud-Based Data Warehouses is revolutionizing how Data Analytics are performed.

Overview

Today, massive amounts of data are created and demanded by consumers, and a business needs to be able to leverage the power that Big Data allows. Without some sort of Data Warehouse, this information is almost impossible to make sense of. With the development of cloud storage technologies, Cloud-based Data Warehouses are rising in popularity. Cloud Data Warehouses are similar in function to traditional data warehouses; however, there are key differences that make them superior. Rather than being stored on physical hardware that the business itself must own, the data in a Cloud Data Warehouse is held entirely online with no need for on-premises storage equipment. In this article, I will cover the characteristics of a Cloud Data Warehouse and outline the advantages it holds over traditional data warehouses. I will also cover the traditional Data Warehouse architecture as well as some less conventional architectures. Lastly, I will discuss some key concepts of Data Warehouses.

What is a Cloud Data Warehouse?

The most limiting and detrimental aspect of traditional data warehouses is the need to purchase and maintain expensive physical servers. The costs of maintaining physical servers are high: property needs to be leased to house them, the building's climate needs to be kept cool so they do not overheat, and the servers must be maintained and eventually replaced. They also consume a large amount of electricity. A Cloud Data Warehouse has the same functionality as a traditional data warehouse; however, its costs, maintenance, and accessibility are much improved.

Cloud-based Data Warehouses do not require their customers to have on-premises servers to store data. Instead, companies such as Amazon Web Services sell the use of their servers to other companies. Rather than worrying about the costs of maintaining physical hardware and the database environment, a company can use AWS' servers to house its data. Due to the lack of physical equipment, the initial setup for a cloud-based data warehouse is much quicker and easier than that of a traditional warehouse. Not only are Cloud-based Data Warehouses easy to set up, they also offer functionality and performance superior to traditional data warehouses. Cloud Data Warehouses process queries using a more advanced method known as massively parallel processing: rather than processing a query with a single processor, multiple processors are used, which allows multiple databases (or partitions of the data) to be searched simultaneously. Therefore, complex queries spanning multiple databases can be completed much more quickly. See this article for more information on data warehouse concepts.
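The sketch below illustrates the parallel-processing idea in miniature: a "query" (a sum over sales records) is split across worker processes, one per partition, and the partial results are combined. The in-memory partitions stand in for separate database nodes and are invented for the example.

```python
# A simplified illustration of massively parallel processing: each worker
# process scans only its own partition, and the partial sums are combined
# at the end, the way an MPP warehouse fans a query out to its nodes.
from multiprocessing import Pool

partitions = [
    [("widget", 120.0), ("gadget", 75.5)],
    [("widget", 310.0), ("gizmo", 42.0)],
    [("gadget", 99.9), ("widget", 18.0)],
]

def partial_sum(partition):
    # Each worker handles one partition, as a single MPP node would.
    return sum(amount for _, amount in partition)

if __name__ == "__main__":
    with Pool(processes=len(partitions)) as pool:
        totals = pool.map(partial_sum, partitions)
    print(sum(totals))  # combine partial results into the final answer
```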

Data Warehouse Architecture

Traditionally, Data Warehouse architecture is comprised of three tiers, each responsible for part of the process of storing and accessing data. The first tier to interact with external data is the Bottom Tier, which contains the Data Warehouse server. After data has been extracted and transformed, it is stored in the Data Warehouse, where it becomes accessible to the second tier. The Middle Tier involves an OLAP server, which is responsible for transforming the stored data into a structured form that is useful for analysis; without an OLAP server, more advanced queries would be difficult to conduct. Although this data is structured to be queried, this is not the tier where data analytics are conducted. The Top Tier involves the front-end tools: analytical tools, data mining tools, and query and reporting tools. This is the final step of examining the data, in which meaningful insights and trends are extracted to aid business decision makers with relevant statistics. Data analytics are revolutionizing the way that businesses make decisions, as their insights may provide guidance in situations of uncertainty.

Less traditional Data Warehouse architectures, known as the Kimball and Inmon approaches, focus on different aspects of the data warehouse. The Kimball approach emphasizes data marts; a data mart is a collection of data that is specifically relevant to a certain branch of a business, and the Kimball method designs the Data Warehouse architecture by combining multiple data marts. The Inmon approach treats the Data Warehouse as a single central store for all enterprise data, which is then divided into separate data marts. Please visit this article for more information on data marts.
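To make the data mart idea concrete, here is a small Kimball-style sketch: a fact table joined to a dimension table and queried the way a sales data mart might be. SQLite is used only to keep the example self-contained, and the table and column names are invented.

```python
# A small sketch of a star-schema (Kimball-style) layout: a fact table of sales
# joined to a product dimension, queried with a typical top-tier analytical query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'electronics'), (2, 'furniture');
    INSERT INTO fact_sales  VALUES (1, 250.0), (1, 99.0), (2, 400.0);
""")

# Revenue per product category, the kind of question a sales data mart answers.
for row in conn.execute("""
        SELECT d.category, SUM(f.amount) AS revenue
        FROM fact_sales f JOIN dim_product d USING (product_id)
        GROUP BY d.category
        """):
    print(row)
```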

Key Concepts

To better understand how a Data Warehouse works, I will discuss some key concepts. As I stated earlier in the article, one of the most important advantages of a Cloud-based Data Warehouse is increased performance. Performance is a measure of the response time of queries. Users of a data warehouse expect their queries to complete quickly; a slow query indicates poor performance and that a change needs to be made.

Scalability is another term that is important for understanding how a Data Warehouse operates. Scalability is the ability of a Data Warehouse to meet increases in query demands. If queries are becoming increasingly complex or numerous, the system must be able to expand its processing capacity to keep answering them quickly. Cloud-based Data Warehouses are inherently more scalable than hardware-based Data Warehouses. For more information on key concepts and measures of databases, please visit this article.

Conclusion

Many factors have led to the rise in popularity of Cloud-based Data Warehouses: the invention of the internet has allowed for instantaneous communication across all areas of a global company, the rise of social media provides a never-ending supply of data about consumers, and Cloud Data Warehouses make cheap data storage widely available. The development of cloud technology has advanced the data collection and analysis process. Businesses and data analysts can now leverage the scalability of a cloud-based Data Warehouse, which is invaluable when dealing with Big Data. Data collection will only continue to grow. In my opinion, the future of collecting, managing, and analyzing data will be conducted entirely with Cloud Data Warehouses. If you are interested in finding out more about the differences between cloud-based and traditional data warehouses, please visit this article.

Bibliography

“Data Warehouse Architecture: Traditional vs. Cloud.” Panoply, panoply.io.

Rouse, Margaret. “What Is MPP (Massively Parallel Processing)?” WhatIs.com, TechTarget, whatis.techtarget.com.

Adelman, Sid. “Measuring the Data Warehouse.” EIM Institute, EIM Institute, 15 Apr. 2016, www.eiminstitute.org.

“Cloud vs. On-Prem Data Warehouse.” Xplenty, Xplenty, www.xplenty.com.

“Data Mart.” Wikipedia, Wikimedia Foundation, 4 May 2018, en.wikipedia.org.

How is Sales Connected to Data Science?

Overview

Almost every company naturally utilizes cross-functional operations. As more money and resources are spent on IT and technology, its importance and influence on other business functions grows. Sales in particular is a function that is becoming increasingly technical and more reliant upon technology than ever. Many emerging technologies and data techniques will revolutionize the way that sales processes are conducted. Rather than relying on traditional methods such as cold-calling, new methods of attracting customers must be adopted. In this article, I will discuss how sales is connected to data science.

Sales Development Representative

As technology advances and grows, so should a sales department. How sales is conducted today is much different than it was ten years ago, and this has affected several key roles in sales, namely the Sales Development Representative. A Sales Development Representative adds value for their company by prospecting potential leads for customers. The representative then relays the high-quality leads to the sales team, which is more focused on closing deals. Without the Sales Development Representative, regular salespeople would have a much tougher time juggling prospecting for potential customers and closing sales.

Sales and Data Science

Analytics are playing an increasingly crucial role in business functions where they are only just beginning to develop. Data science is relevant to salespeople because it gives them exactly what they need to be successful: more information on the customer and meaningful insights that increase the efficiency of sales. Sales Development Representatives are utilizing something known as 'Sales Intelligence' to be more successful at their job. 'Sales Intelligence' is a type of technology that analyzes data for sales. These meaningful insights point sales departments to better leads and can help develop a more effective approach, thus increasing sales.

This works by combining multiple analytic strategies. Businesses often rely upon historical data to determine potential customers and how to approach them; combining this traditional strategy with algorithms trained on data sets allows a business to find higher-grade leads. Another strategy that is under development and will only increase in relevance is the use of machine learning and AI in sales.

Machine learning is the process of teaching a computer to learn from data and outcomes as a human would. Feeding a computer vast amounts of data along with its outcomes is known as providing 'training data'. The computer then analyzes this data and its outcomes and determines whether any statistical factors contribute to the outcomes; computers can recognize patterns in large data sets far better than a human can. The model learned from the training data is then applied to the data set we would like to observe: the computer takes the knowledge and insights it learned and uses them to predict whether a potential customer is likely to buy a product based on their demographics, psychographics, and geographics. This is far more advanced pattern recognition than a human would ever be able to conduct; a computer can analyze and interpret thousands of rows of data, including unstructured data, to draw these conclusions. This technology is only becoming more advanced.
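A hedged sketch of what such lead scoring could look like with scikit-learn is shown below: a logistic regression is trained on historical leads and then scores a new one. The features, labels, and numbers are fabricated purely for illustration.

```python
# A sketch of machine-learning lead scoring: train a logistic regression on
# historical leads (features plus whether each one converted), then estimate
# the conversion probability of a new lead. All values below are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [company_size, past_purchases, website_visits_last_month]
X_train = np.array([
    [500,  3, 12],
    [20,   0,  1],
    [1500, 8, 30],
    [60,   1,  2],
    [300,  2,  9],
    [10,   0,  0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = lead eventually converted

model = LogisticRegression().fit(X_train, y_train)

new_lead = np.array([[800, 4, 15]])
print(model.predict_proba(new_lead)[0, 1])  # estimated probability of conversion
```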

How can a Business Leverage Sales Intelligence?

The leap in capabilities of data analytics and data science needs to be implemented within sales departments in order for them to be successful in the growing global economy. Door-to-door sales and cold-calling are no longer nearly as effective as they once were. Adopting a more sophisticated lead-finding strategy is essential to gaining an analytical edge over a competitor.

A sales department must now work closely with the analytics department, as their roles continue to overlap. Analytics will allow for better recognition of leads based on the advanced machine learning techniques discussed earlier. This will allow for superior lead generation and lead scoring.

Analytics are also useful because they give more information about the consumer to aid in the closing process. For example, analytics allow a sales team to better understand the demographics and geographics of a potential lead. This lets them tailor their approach to what would be effective at closing a sale with a particular customer. By understanding the customer and their motivators better, a business will be better suited to decide which salesperson or team to assign to the potential customer.

In order to maximize profits, a business needs to maximize each customer's value. This means cross-selling or upselling to them. Knowing a customer's motivators allows a salesperson to more effectively tailor their approach and ultimately sell them more products or services. Historical sales data will also give a company a prediction of what the customer is most likely to buy based on current needs.

Pricing is one of the most important factors to consider, as it can either attract customers or push them toward a competitor's product. Using historical sales data and machine learning allows a company to make accurate predictions about how much a customer is willing to pay for a product, and it can also factor in the competitive environment and growth in demand. This makes a dynamic pricing model plausible. A dynamic pricing model uses a different price for different customers, depending on their needs. For example, if a customer almost always buys a similar product from a competitor, it might be a good idea to approach the customer and undercut the competitor's price.
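The snippet below is a toy version of that idea, not a real pricing model: it chooses a per-customer price from an assumed willingness to pay and the competitor's current price, with an arbitrary undercut margin and a cost floor.

```python
# A toy sketch of dynamic pricing: undercut the competitor slightly, but never
# price above what the customer is estimated to pay or below a cost floor.
# The numbers and the 5% undercut margin are arbitrary assumptions.
def dynamic_price(willingness_to_pay, competitor_price, floor_price):
    candidate = min(willingness_to_pay, competitor_price * 0.95)  # undercut by 5%
    return max(candidate, floor_price)                            # never sell below cost

print(dynamic_price(willingness_to_pay=120.0, competitor_price=110.0, floor_price=80.0))
```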

Closing Thoughts

The introduction of new data analytics technology and strategies has rapidly changed sales practices, keeping pace with the changes in consumer practices and preferences. While to some companies this may seem like a disadvantage, it can actually become a revenue booster for companies who leverage analytics and data science well. Having a more complete picture of potential and existing customers allows a sales department to better tailor its approach and will ultimately result in more sales. I am personally interested in how advanced machine learning and AI will be in a few years' time. Will there still be a need for traditional salespeople in the future, or will selling be conducted entirely by algorithms?

Bibliography

Jakar, Daniel. “The Toolkit Every Sales Development Representative Needs.” Lusha, Lusha, 31 Dec. 2017, www.lusha.co.

“What are Sales Development Reps (SDRs)?” RingDNA, RingDNA, www.ringdna.com.

Atkins, Charles, et al. “Unlocking the Power of Data in Sales.” McKinsey, McKinsey & Company, Dec. 2016, www.mckinsey.com.

Marr, Bernard. “How Machine Learning Will Transform The Sales Function.” Forbes, Forbes, 6 July 2017, www.forbes.com.

“Sales Intelligence.” Wikipedia, Wikimedia Foundation, en.wikipedia.org.

Backup and Recovery on AWS vs Azure

Overview

For any business, there is a serious risk of losing crucial files in the event of an unexpected outage. Backup and recovery is the process of copying files that are critical to operations into a separate, second environment. In the case of a disaster, the critical files would survive because they are kept in a separate location. As cloud-based storage solutions become more affordable, cloud-based backup storage is rising in popularity across many IT sectors. When the need arises, backups can significantly cut a business' recovery time and restore data much faster.

Disaster recovery is a different strategy that some businesses use to mitigate the risk of outages. A disaster recovery plan is a predetermined procedure that a business follows in order to minimize the negative effects of a disaster or outage, such as loss of data or downtime. This process often includes timeframes in which the tasks must be completed. For a more complete picture of a disaster recovery plan, please refer to this article that I have previously written.

Almost every business that has its own servers or is at risk of a crash should have at least one of the two options in place (a backup or a disaster recovery plan). Since a disaster recovery plan also encompasses backups, it is my recommendation that every business at risk creates its own disaster recovery plan as well as backups. However, if this is too time-consuming or expensive, the bare minimum would be keeping a backup of critical files. Amazon Web Services has emerged as a leader in this industry, and in the following section, I will discuss how AWS handles backup and recovery.

How does AWS handle Backup and Recovery?

Amazon Web Services is increasing in popularity because it offers customers multiple availability zones, scalability, and cloud-based computing and storage at a relatively low cost. As a global leader in cloud services, Amazon has its own set of solutions and approaches to developing a backup and recovery solution.

The first step that AWS suggests for a recovery plan is to identify RTO and RPO requirements. RTO (recovery time objective) measures how long a system can be down for repair before there are serious long-term effects. RPO (recovery point objective) measures how often a system needs to back up storage to be able to continue with relevant data after a disaster. Every business will have its own values for RTO and RPO, depending on factors that are exclusive to its situation. For more information on RTO and RPO, please visit this article.
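As a small worked example of the RPO concept, the check below compares a backup interval against an RPO target: in the worst case, a failure just before the next backup loses roughly one full interval of data. The numbers are illustrative, not recommendations.

```python
# A simple worked example of the RPO idea: if backups run every N hours, a
# failure just before the next backup loses up to N hours of new data, so the
# backup interval must not exceed the RPO target. Numbers are illustrative.
def meets_rpo(backup_interval_hours, rpo_hours):
    return backup_interval_hours <= rpo_hours

print(meets_rpo(backup_interval_hours=6, rpo_hours=4))  # False: back up more often
print(meets_rpo(backup_interval_hours=2, rpo_hours=4))  # True: schedule satisfies the RPO
```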

AWS offers a recovery solution applicable to a business whose workload environment runs entirely on AWS; this is referred to as a Cloud-Native Infrastructure plan. If this situation applies to a business, Amazon has many built-in tools that turn a disaster recovery nightmare into a manageable solution.

The first tool is known as EBS Snapshot-based Protection. This tool allows AWS users to utilize block storage for their needs, whether it be a database backup or an unstructured data backup. It copies an Amazon EBS volume and transfers the copy to Amazon S3, and this duplication process stores the data across multiple of Amazon's availability zones. Therefore, if any one availability zone fails, the data will still be stored in another zone. The tool's functionality can be adjusted to perform at any level depending on the requirements of its user.
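A minimal sketch of triggering such a snapshot with boto3 (AWS's Python SDK) is shown below; the volume ID is a placeholder, and in practice the call would run on a schedule chosen to satisfy the RPO discussed earlier.

```python
# A minimal sketch of creating an EBS snapshot with boto3. The volume ID is a
# placeholder; scheduling and retention policies are left out of the example.
import boto3

ec2 = boto3.client("ec2")
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",               # placeholder volume ID
    Description="Nightly backup of the analytics data volume",
)
print(snapshot["SnapshotId"])  # snapshots are stored durably in Amazon S3
```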

Amazon Web Services also offers tools that are specifically tailored to database backups. A user has the option of running their own database on EC2 or making use of Amazon Relational Database Service (Amazon RDS). RDS is useful because it allows users to create a live, read-only replica of their database. This is useful for databases that must always be running and whose recovery time needs to be as short as possible.

How does Microsoft Azure handle Backup and Recovery?

One of Amazon Web Services' greatest competitors in the cloud services sector is Microsoft, which has launched its own competing cloud-based storage and computing platform called Azure. Azure also offers multiple options for storing data, including locally redundant storage and geo-redundant storage, and it suggests several backup components.

The first backup component is the Azure Backup (MARS) agent. These backups are stored in the Recovery Services vault of Azure. This component does not require a separate backup server and can be deployed on any Windows server using Azure.

The second backup component offered is known as System Center DPM. This component protects workloads by creating application-aware snapshots, which allows files, folders, VMs, applications, and workloads to be backed up to the Recovery Services vault. This component is also able to store backups on a locally attached disk or tape.

Another backup component that is offered is known as Azure Backup Server. This backup server is also able to create application-aware snapshots, is compatible with the Recovery Services vault, and is able to back up VMware VMs. However, this backup server does not require a System Center license. For more information on Azure, please visit this article.

AWS vs. Microsoft Azure

Both Amazon Web Services and Microsoft Azure are well-developed and thorough services, and both offer high-quality backup and recovery solutions. In this portion of the article, I will compare the two options' backup and recovery solutions.

Similarities:

• Scalability

Both AWS and Azure are highly scalable. The user does not have to worry about capacity limits or administrative overhead, and the level of service can easily be adjusted with either option.

• Durability

Either service will provide its user with high durability. Both options store their backup data in multiple locations, so there is only a slim chance that any data will be lost in a disaster.

• Pay-as-you-go

This payment method seems to be the norm for cloud-based services. It means that a user only pays for services as they are needed, and the absence of long-term contracts or fees creates flexibility for customers.

Concluding Thoughts

In my opinion, either Azure or Amazon Web Services would be a good option for a company that is looking to back up its files through a cloud-based service. However, simply because it has been around longer, I would personally use AWS. It has proven to be reliable and offers an extremely impressive durability of 99.999999999% for the objects stored within it.

Bibliography

“Operational backup, recovery, and DR for Amazon Web Services.” N2WS, N2SW Software Inc., n2ws.com.

Warminski, Joe. “Backups vs. Disaster Recovery – Covering Your Bases.” Think IT, SingleHop LLC, 9 Aug. 2017, www.singlehop.com.

“Backup and Recovery Approaches Using AWS.” Amazon Web Services, Amazon Inc., June 2016, d0.awsstatic.com.

“Backup and Recovery.” Amazon Web Services, Amazon Inc., aws.amazon.com.

Markgalioto. “What is Azure Backup?” Microsoft Docs, Microsoft Inc., docs.microsoft.com.

“Disaster Recovery and Backup Solutions.” Microsoft Azure, Microsoft, azure.microsoft.com.