Cheap Research Paper Writing Services UK

Looking For Research Paper Writing Services That Can Help You Complete A Research Paper As Per The Requirements Of A Certain Journal? Have You Collected Primary Data But Do Not Know How To Use It In A Research Paper Format, And Need Assistance From A Reliable Research Paper Writing Service?

Worry no more, as you have come to the right place! At Cheap Essay Writing UK, you can get your desired research paper writing service on time and at affordable rates, with a guarantee of success!

Whether you are a doctoral student or looking for a better job at an educational institute, a research paper published in a reputable academic or non-academic journal validates your degree and adds value to your curriculum vitae.

However, writing a research paper and getting it published always requires time, effort and strong writing skills.

  1. You wish to have a research paper published under your name in a reputable research journal, but don’t have enough knowledge or writing skill to bring it up to that journal’s standard, and are thus seeking help from a professional research paper writing service?
  2. You have a research idea in mind and have collected some data, but are confused about selecting an appropriate data analysis tool, so you need research paper help from an authentic research paper writing service?
  3. You have the knowledge and skills to write a research paper, but are busy with your job and don’t find enough time to focus on it, and are thus looking for a reliable company that can provide you professional research paper writing services?

No matter what problem you are facing in writing your research paper, we are here to help you complete it in the exact format required by your institution. Our writers are not only experts in research paper writing but also have access to various online libraries and databases. Thus we can provide you with a well-researched and well-written research paper within your budget and without compromising on quality.

At Cheap Essay Writing UK, We Offer Research Paper Writing Services That You Can Always Rely On!

At Cheap Essay Writing UK, we offer research paper writing services with 100% customer satisfaction. We ensure that you get a perfect research paper on time, and to this end we assign a team of dedicated writers and data analysts to complete it. Further, to ensure 100% accuracy, we forward your research paper to our quality assurance department, where our editors proofread it and run a plagiarism scan.

So, Cheap Essay Writing UK is The Best Place Where You Can Avail Research Paper Writing Services Without Any Hesitation.

Let’s introduce you to some of the special benefits you get with our research paper writing services:

Research Paper Writing Service By Qualified UK Research Writers.

With our research paper writing services, you are guaranteed that a dedicated UK research writer will complete your research paper. These writers have full command of the English language, different citation styles, and research methods and techniques, and they also have know-how in using data analysis tools such as Stata, SPSS and MATLAB.

Research Paper Writing Services With 24 Hours Customer Support.

We provide live customer support around the clock, even on weekends. So you never need to worry about the progress of your paper, as customer support representatives are always there to update you on it.

Research Paper Writing Service With Guaranteed Continuous Communication With the Writer.

As soon as you place your research paper order, a dedicated writer is assigned to it who immediately emails you to confirm that the order details have been reviewed and that your research paper is in process. Throughout the writing process you are in touch with your writer: you can get updates and ask for drafts or revisions.

Research Paper Writing Service With Complete Customer Satisfaction.

With our research paper writing service we guarantee 100% customer satisfaction.

Order Now For Research Paper Writing Services UK To Get the Best Grades!

Revlon and Avon Analysis


Table of Contents

Executive summary

Introduction

Company profile and description for Revlon and Avon

Performance and capital analysis

Revlon and Avon “Tax burden”

Revlon and Avon “Interest burden”

Revlon and Avon “Operating margin”

Revlon and Avon “Asset utilization”

Revlon and Avon “Financial leverage”

Capital structure

The working capital analysis “Revlon & Avon”

Days receivable

Cash conversion cycle

Days’ inventory

Days payable

Moody’s bond rating analysis

Executive summary

This paper analyzes the performance of two major companies, Revlon Inc. and Avon Products Inc. The two companies have a significant presence in the stock market and compete directly with each other in product sales, distribution and service offerings. This research and analysis focuses mainly on the two companies’ sales and market performance.

This research examines the existing cross-correlation between Revlon Inc. and Avon Products Inc., aiming to compare the impact of business volatility on Revlon and Avon and to assess how much market risk would be diversified away if they were combined in the same portfolio over a given time horizon. Pairs-trading strategies could also be used, matching a long position in Revlon with a short position in Avon. Ongoing volatility patterns of Revlon and Avon should also be checked.

Considering a 30-day investment horizon, Revlon Inc. is expected to generate 0.44 times more return on investment than Avon. On the other hand, Revlon Inc. is 2.26 times less risky than Avon. It trades at around 0.1 of its potential returns per unit of risk, while Avon Products Inc. is currently generating about -0.02 per unit of risk. If you had invested $3,611 in Revlon Inc. on July 15, 2015 and sold it today, you would have earned a total of $64.00 from holding Revlon Inc., or generated a 1.77% return on investment over 30 days.



Company profile and description for Revlon and Avon


Revlon, Inc., incorporated on April 24, 1992, manufactures, markets and sells worldwide a range of beauty and personal care products, including cosmetics, hair color, hair care and hair treatments, beauty tools, men’s grooming products, anti-perspirant deodorants, fragrances, skincare and other beauty care products. The Company operates through two segments: the Consumer segment and the Professional segment. The Company’s Consumer segment consists of products that are manufactured, marketed and sold primarily within the mass retail channel in the United States and internationally, as well as through certain department stores and other specialty stores outside the United States, under brands such as Revlon, Almay, Sinful Colors and Pure Ice in cosmetics; Revlon ColorSilk in women’s hair color; Revlon in beauty tools; and Mitchum in anti-perspirant deodorants.

Consumer Segment

The Revlon brand consists of face makeup, including foundation, powder, blush and concealers; lip makeup, including lipstick, lip gloss and lip liner; eye makeup, including mascaras, eyeliners, eye shadows and brow products; and nail color and nail care lines. Franchises within the Revlon brand include Revlon ColorStay, Revlon PhotoReady, Revlon Age Defying, Revlon Super Lustrous, Revlon ColorBurst and Revlon Grow Luscious. The Company’s Almay brand consists of hypo-allergenic, dermatologist-tested, fragrance-free cosmetics and skincare products. The Almay brand consists of face makeup, including foundation, pressed powder, primer and concealer; eye makeup, including eye shadows, mascaras and eyeliners; lip makeup; and makeup removers. Franchises within the Almay brand include Almay Smart Shade in face; Almay Intense i-Color in eye; and Almay Color + Care in lip.



The oldest beauty company in the United States, Avon Products, Inc. has grown from a modest line of perfumes sold door-to-door into one of the world’s leading cosmetics brands. It makes and sells cosmetics, fragrances, toiletries, accessories, apparel, and various decorative home furnishings. Avon uses a distinctive direct-selling system, which was largely responsible for its tremendous success in the 1950s and 1960s, when women were easily found at home for sales purposes. After unsuccessful attempts at diversifying into the healthcare service industry left the company with massive debts in the late 1980s and early 1990s, Avon began to refocus on its roots: beauty products and direct selling. Every day, Avon brings beauty to the lives of women all over the world. At Avon, beauty means finding the right lipstick shade for a customer; providing an earnings opportunity so a woman can support her family; and enabling a woman to get her first mammogram. Beauty is about women looking and feeling their best. It is about championing economic empowerment and improving the lives of women around the globe.

Avon is a company steeped in tradition, grounded in its core values and principles as well as its vision “to be the company that best understands and satisfies the product, service and self-fulfillment needs of women – globally.”

A leading global beauty company and one of the world’s largest direct sellers, Avon has nearly $9 billion in annual revenue. Its product line includes beauty, fashion and home products, with such well-recognized brand names as Avon Color, ANEW, Skin So Soft, Advance Techniques, Avon Naturals and mark.

Performance and capital analysis

Revlon and Avon “Tax burden”

(Graph: Revlon and Avon tax burden)

In relation to the graph above, the tax burden has an impact on both companies, and higher taxes from the acquisition should strain cash flows. Revlon’s provisions for income taxes stood at $72 million in FY13, against a prior estimate of $82 million.

Furthermore, an increase in EBIT going forward, resulting from higher net sales of the Revlon-TCG integrated unit, should lead to an increase in tax payables. An effective tax rate of 30% was expected for FY14, with total tax provisions of approximately $85 million compared with a prior estimate of $81 million. Changes made to tax estimates for 2014 and beyond have contributed to a 25% decrease in the graph shown above. For Avon, from 2010 to 2011 the tax burden was relatively constant; toward the end of FY12 there was a slight drop, followed by an increase in 2013 and again a slight drop in FY14.
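As an illustrative sketch, the tax-burden component can be expressed as net income over pretax income. The dollar figures below are hypothetical, chosen only to match the 30% effective tax rate expected for FY14 above:

```python
def tax_burden(net_income: float, pretax_income: float) -> float:
    """Tax burden = net income / pretax income, i.e. 1 minus the effective tax rate."""
    return net_income / pretax_income

# Hypothetical pretax income of $280M at the 30% effective rate cited above:
pretax = 280.0
net = pretax * (1 - 0.30)
print(round(tax_burden(net, pretax), 2))  # 0.7
```

A tax burden closer to 1 means the company keeps a larger share of its pretax earnings.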

Revlon and Avon “Interest Burden”


In relation to the graph above, the interest burden of the two companies within the industry differs. Avon’s interest burden has been deteriorating since 2010. From 2010 to 2011 there was a slight increase, a positive figure of 2.42; in 2012 there was a decrease of about 11.88; and from fiscal 2013 to 2014 a slight decrease of about 1.56 was recorded.

For Revlon Inc., from 2010 to fiscal 2014 the interest burden was constantly shifting at a rate of about 1%, with increases and decreases observed in 2013.

It can be concluded that Revlon Inc.’s interest burden is relatively constant compared with Avon’s within the industry.

Revlon generally carries the lower interest burden of the two.

Revlon and Avon “Operating margin”

Operating margin gives analysts an idea of how much a company makes (before interest and taxes) on each dollar of sales. When looking at operating margin to determine the quality of a company, it is best to look at the change in operating margin over time and to compare the company’s annual or quarterly figures to those of its rivals. If a company’s margin is increasing, it is earning more per dollar of sales. The higher the margin, the better.

For the two companies, Revlon and Avon, the graph above implies that the margins are relatively flat. Avon’s operating margin is constant, so it is generating fewer dollars per dollar of sales, and the same applies to Revlon, whose margin has been roughly constant from 2010 to 2014. The operating margins of both companies are generally at the same level in the industry, with increases over 2010-2011 to 2012 and again from 2013 to 2014.
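A minimal sketch of the calculation described above; the EBIT and sales figures are hypothetical:

```python
def operating_margin(ebit: float, net_sales: float) -> float:
    """Operating margin = earnings before interest and taxes / net sales."""
    return ebit / net_sales

# Hypothetical: $150M of EBIT on $1,500M of net sales.
margin = operating_margin(150.0, 1500.0)
print(margin)  # 0.1, i.e. 10 cents earned per dollar of sales before interest and taxes
```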

Revlon and Avon “Asset utilization”


The asset utilization ratio calculates the total revenue earned for every dollar of assets a company owns.

For instance, with an asset utilization ratio of 52%, a company earned $0.52 for every dollar of assets it held. An increasing asset utilization ratio means the company is generally becoming more efficient with every dollar of assets it has.

In relation to the graph above, Revlon’s asset utilization decreases steadily while Avon’s increases steadily within the industry margin. From 2010 to 2011 there was an increase in Avon’s asset utilization, which then dropped significantly in 2012. In 2013 there was a slight increase, followed by a sharp increase of 0.08 in the following fiscal year.

For Revlon, there was a steady decrease from 2010 to 2014, including a large drop of 0.10 from 2012 to 2013 within the industry margin.
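The 52% worked example above can be sketched directly:

```python
def asset_utilization(total_revenue: float, total_assets: float) -> float:
    """Asset utilization = total revenue earned per dollar of assets owned."""
    return total_revenue / total_assets

# The example from the text: a 52% ratio means $0.52 earned per dollar of assets.
ratio = asset_utilization(52.0, 100.0)
print(ratio)  # 0.52
```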

Revlon and Avon “Financial leverage”

Financial leverage is the extent to which a company uses fixed-income securities such as debt and preferred equity. The more debt financing a company uses, the higher its financial leverage. A high degree of financial leverage means high interest payments, which negatively affect the company’s bottom-line earnings per share.

In relation to the graph above, Avon Inc. uses its debt to acquire additional assets in the business within the industry margin level, while Revlon Inc. uses less of its debt to acquire additional assets.

According to the graph, Revlon’s use of debt is generally low while Avon’s is generally high.
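The five ratios examined in this section (tax burden, interest burden, operating margin, asset utilization and financial leverage) are the components of the classic five-factor DuPont decomposition of return on equity. A minimal sketch with hypothetical inputs; none of the figures below come from Revlon’s or Avon’s statements:

```python
def dupont_roe(net_income: float, pretax_income: float, ebit: float,
               sales: float, assets: float, equity: float) -> float:
    """Five-factor DuPont decomposition:
    ROE = tax burden * interest burden * operating margin
          * asset turnover * equity multiplier."""
    tax_burden = net_income / pretax_income
    interest_burden = pretax_income / ebit
    operating_margin = ebit / sales
    asset_turnover = sales / assets
    leverage = assets / equity          # the financial leverage component
    return tax_burden * interest_burden * operating_margin * asset_turnover * leverage

# Hypothetical figures; the factors must multiply back to net_income / equity.
roe = dupont_roe(70.0, 100.0, 120.0, 1000.0, 800.0, 400.0)
print(round(roe, 4))  # 0.175, the same as 70 / 400
```

Because the intermediate terms cancel, the product always equals net income over equity; the decomposition is useful precisely because it shows which of the five components drives a change in ROE.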

Capital structure


(Chart: capital structure, March 2012 to March 2014)


Most recent (March 2015):

Type        %        Amount
Debt        156      1.8 bill
Preferred   -        -
Equity      -55.6    -658.9 mill

Historical:

Type        %        Amount
Debt        144      1.9 bill
Preferred   -        -
Equity      -46.3    -589.0 mill


                                      Firm     Ind Avg

Debt/Assets                           0.98     0.33
Debt/Equity                          -1.16     0.61
Current Assets/Current Liabilities    1.75     1.14
EBITDA/Interest                       3.59
Debt/EBITDA                           6.25     2.15
Cash Flow Ops/Total Debt              0.11     0.39




                                      Firm     Ind Avg

Debt/Assets                           0.53     0.33
Debt/Equity                           8.98     0.61
Current Assets/Current Liabilities    1.29     1.14
EBITDA/Interest                       4.65
Debt/EBITDA                           4.96     2.15
Cash Flow Ops/Total Debt              0.11     0.39

The working capital analysis “Revlon & Avon”


Working capital can be summed up in the following equation where by

Working capital = current assets - current liabilities

The working capital ratio (Current Assets / Current Liabilities) indicates whether a company has enough short-term assets to cover its short-term debt. Anything below 1 indicates negative working capital (W/C), while anything above 2 means the company is not investing its excess assets. Most analysts believe that a ratio between 1.2 and 2.0 is sufficient. Working capital is also known as “net working capital”.

If a company’s current assets do not exceed its current liabilities, it may run into trouble paying back creditors in the short term. The worst-case scenario is bankruptcy. A declining working capital ratio over a longer period could also be a red flag that warrants further analysis. Likewise, if a company is not operating efficiently (slow collections), this will show up as an increase in working capital.
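The equation and ratio above can be sketched as follows; the dollar figures are hypothetical:

```python
def working_capital(current_assets: float, current_liabilities: float) -> float:
    """Working capital = current assets - current liabilities."""
    return current_assets - current_liabilities

def working_capital_ratio(current_assets: float, current_liabilities: float) -> float:
    """Below 1 signals negative working capital; 1.2-2.0 is usually considered adequate."""
    return current_assets / current_liabilities

# Hypothetical: $175M of current assets against $100M of current liabilities.
wc = working_capital(175.0, 100.0)
ratio = working_capital_ratio(175.0, 100.0)
print(wc, ratio, 1.2 <= ratio <= 2.0)  # 75.0 1.75 True
```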


Days receivable

  2010 2011 2012 2013 2014
AVON INC 27.76 37.54 38.05 36.87 33.90
REVLON INC 54.55 56.02 55.28 61.90 44.92
INDUSTRY 0.71 0.55 0.58 0.70 0.93


The table above illustrates the “days receivable” analysis for the two companies.

Days receivable is a measure of the average time a company’s customers take to pay for purchases, equal to accounts receivable divided by annual credit sales, multiplied by 365.

In this case, the time Revlon Inc.’s customers take to pay for purchases is generally higher than Avon Inc.’s in FY14, relative to the industry margin.

The graph below represents this information.
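The formula above (receivables over annual credit sales, times 365) can be sketched with hypothetical inputs:

```python
def days_receivable(accounts_receivable: float, annual_credit_sales: float) -> float:
    """Days receivable = accounts receivable / annual credit sales * 365."""
    return accounts_receivable / annual_credit_sales * 365

# Hypothetical: $150M of receivables on $1,000M of annual credit sales.
dso = days_receivable(150.0, 1000.0)
print(round(dso, 2))  # 54.75 days, on the order of Revlon's figures in the table above
```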


Cash conversion cycle

  2010 2011 2012 2013 2014
AVON INC 25.19 33.09 19.96 17.42 0.84
REVLON INC 75.96 71.72 64.51 68.13 46.62
INDUSTRY 1.13 0.57 0.51 1.92 0.80


This is a metric that expresses the length of time, in days, that it takes the two companies to convert resource inputs into cash flows. The cash conversion cycle attempts to measure the amount of time each net input dollar is tied up in the production and sales process before it is converted into cash through sales to customers. The metric (in the table above) looks at the amount of time needed to sell inventory, the amount of time needed to collect receivables, and the length of time the two companies are afforded to pay their bills without incurring penalties.

From the table above, it is evident that Avon Inc. has the shortest cycle; thus capital is tied up in the business process for less time.

The graph below represents this information.
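The cycle combines the three working capital measures in this section. Using Avon’s 2014 figures (33.90 days receivable from the table above, plus 59.81 days’ inventory and 92.87 days payable from the tables later in this section):

```python
def cash_conversion_cycle(days_inventory: float, days_receivable: float,
                          days_payable: float) -> float:
    """CCC = days' inventory + days receivable - days payable."""
    return days_inventory + days_receivable - days_payable

# Avon's 2014 values from the tables in this section:
ccc = cash_conversion_cycle(59.81, 33.90, 92.87)
print(round(ccc, 2))  # 0.84, matching Avon's 2014 entry in the cash conversion cycle table
```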


Days’ inventory

This is a financial measure of a company’s performance that gives investors an idea of how long it takes the company to turn its inventory (including goods that are work in progress, if applicable) into sales. Generally, the lower (shorter) the days’ inventory (DI), the better, but it is important to note that the average DI varies from one industry to another.

  2010 2011 2012 2013 2014
AVON INC 70.57 70.35 62.95 67.26 59.81
REVLON INC 92.19 82.25 82.66 117.18 85.53
INDUSTRY 4.68 1.12 0.96 1.41 0.79




This measure is one part of the cash conversion cycle, which represents the process of turning raw materials into cash. Days’ sales of inventory is the first stage in that process. The other two stages are days’ sales outstanding and days’ payable outstanding. The first measures how long it takes a company to receive payment on accounts receivable, while the second measures how long it takes a company to pay off its accounts payable. Comparing Revlon and Avon on this measure, Avon’s days’ inventory fluctuates between increases and decreases, while Revlon’s rose sharply in FY13 but otherwise remained roughly constant in the 80-92 range.
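A sketch of the underlying calculation, with hypothetical inventory and cost-of-goods-sold figures:

```python
def days_inventory(inventory: float, cost_of_goods_sold: float) -> float:
    """Days' inventory = inventory / annual cost of goods sold * 365."""
    return inventory / cost_of_goods_sold * 365

# Hypothetical: $140M of inventory against $600M of annual COGS.
dio = days_inventory(140.0, 600.0)
print(round(dio, 2))  # 85.17 days
```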

Days payable

Both Revlon and Avon must strike a delicate balance with days payable (DP). The longer they take to pay their creditors, the more cash the companies have on hand, which is good for working capital and free cash flow. However, if either company takes too long to pay its creditors, those creditors will be unhappy; they may refuse to extend credit in the future, or they may offer less favorable terms. Also, because some creditors give companies a discount for timely payments, the companies may be paying more than they need to for their supplies. If cash is tight, however, the cost of increasing DP may be less than the cost of forgoing that cash earlier and needing to borrow to cover the shortfall and continue operations.

For the two case-study companies, Avon and Revlon, it is evident that Avon Inc.’s DP shifts steadily upward each year relative to the industry’s constant margin, while Revlon’s DP fluctuates between decreases and increases and is therefore less stable. The table below presents this information.

  2010 2011 2012 2013 2014
AVON INC 73.14 74.80 81.05 86.71 92.87
REVLON INC 70.79 66.54 73.43 110.95 83.84
INDUSTRY 0.00 0.00 0.00 0.69 0.85
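The same style of sketch applies to days payable; the inputs are hypothetical:

```python
def days_payable(accounts_payable: float, cost_of_goods_sold: float) -> float:
    """Days payable = accounts payable / annual cost of goods sold * 365."""
    return accounts_payable / cost_of_goods_sold * 365

# Hypothetical: $150M of payables against $600M of annual COGS.
dpo = days_payable(150.0, 600.0)
print(round(dpo, 2))  # 91.25 days
```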


Moody’s bond rating analysis


Avon’s Baa3 senior unsecured rating reflects its position as one of the largest global direct-selling companies, strong brand recognition in its value-oriented beauty, fashion and home products, and broad geographic diversification. These strengths are tempered by revenue and competitive pressures, a low EBIT margin, and the risk that its distribution advantage in developing markets will erode. These risks weakly position the company within the Baa3 rating. Avon is also exposed to moderate cyclical swings, as its products represent more discretionary purchases than many other non-durable consumer products. Avon’s increasing reliance on developing markets and countries such as Brazil, Russia, Mexico and Venezuela creates exposure to economic and foreign currency fluctuations that lead to greater earnings volatility. Active representative counts and units sold were negative in each of its four major geographic regions in each of the last three quarters. Moody’s believes that this is indicative of the breadth of the company’s revenue and operating pressure, and will be difficult to turn around quickly.

Avon’s efforts to improve profitability are further challenged by the intensely competitive global beauty and personal care categories, and the resulting need to sustain high levels of brand advertising, product development, and investment in representative recruitment, training and support. Avon’s direct-sales model is an advantageous method of reaching consumers in developing markets, as traditional bricks-and-mortar retail penetration is low in these regions.

A sizable cash balance, positive projected free cash flow, and the absence of significant maturities in 2014 and 2015 support Avon’s liquidity position. Substantially all of Avon’s $795 million of cash is held offshore, with roughly $18 million held in Venezuelan bolivars that is not readily accessible and an additional $300 million needed to support ongoing operations. On the other hand, the rest of the cash is accessible with minimal tax leakage. Moody’s believes that the cash and an undrawn $1 billion revolver expiring in March 2017 give Avon sufficient capacity to fund projected needs.


The upgrade of Revlon’s Corporate Family rating to B1 reflects the company’s ability to sustain operating and financial momentum despite the ongoing challenges of the macroeconomic environment and an intensified competitive environment. Revlon’s credit metrics continue to improve modestly, driven by solid profitability and cash flow generation, with further gains expected in fiscal 2011.

Revlon’s B1 corporate family rating reflects the company’s global brand franchises, solid geographic and product diversification across a number of well-known brands in color cosmetics, hair color and fragrances, and sustained solid profitability (fiscal 2010 EBITA margins of 16.2%) and cash flow metrics (fiscal 2010 free cash flow to debt of 6.9%). Revlon’s ratings are constrained by its still relatively high adjusted leverage (fiscal 2010 debt to EBITDA of 5.4 times) and limited scale in the highly competitive cosmetics category, which is characterized by deep-pocketed, larger competitors.


We anticipate that Revlon’s profitability will be sustainable despite the ongoing investments needed to maintain revenue growth and market share, including significant product development, product display capital expenses, and brand advertising and promotional spending. However, Revlon’s ratings will remain somewhat constrained by the highly competitive nature of the cosmetics and personal care category in which it operates and the company’s still relatively high adjusted leverage (5.4 times).





Best custom papers

College and university tutors always overburden their students with many academic papers to complete as homework. Nowadays, while studying at college or university, a student has to score highly on home writing assignments as well as on regular coursework in order to earn a passing grade for the entire course. Course grades in turn are vital for GPA and thus drastically important for one’s educational career. The resulting pressure is often accompanied by the psychological stress and frustration that students experience during tight academic schedules, which directly affects their learning. However, students now have more ways to handle challenging homework and lighten their schedules. Buy the best custom papers written by our professional university writers and you will be free from all the worries associated with research papers and other assignments given to you by tutors at your institution.

Our service has your needs as its core concern, and we always do our best to help you achieve better academic results. We ensure that we write you a paper that earns the best grades and matches the highest academic standards of the modern educational world. Our writers are experts in their specific fields, which ensures that your paper is done by a certified professional in your particular field of study. As such, we are able to meet the academic needs of diverse groups, from high school and undergraduate paper assignments up to PhD dissertations. We can do any paper type at any time. We always follow all of the instructions you give and ensure that your final paper matches your required formats, writing styles and manner of referencing. We are committed to academic honesty, and issues of plagiarism are avoided completely, ensuring that you get absolutely unique writing created especially for you and no one else. We guarantee entirely original custom papers with no similar writings anywhere else in the world, so they will never cause you problems associated with plagiarism. Our writers have access to private libraries and various archives, including all of the largest academic databases. Thus, if you are struggling to locate relevant study materials on your research topic, buy custom papers from us and ensure that your paper or research is supported by the most relevant evidence and materials required in academic writing today.

We pride ourselves on efficiency and effectiveness. We have the capacity to write papers in record time without compromising on quality. Whether you need your paper done as fast as possible, within a couple of hours, within a week or even within a month, we are always here to assist you and will always deliver your paper on time. Our service has your needs at heart, and for that reason we endeavor to write papers that guarantee the best grades and fully meet all requirements of your professor, college or university. Buy custom papers written by professional writers who will always be available for your consultations, questions, follow-ups or inquiries. The charges for our custom papers are very affordable and depend not on the complexity of the assignment, but on the urgency of your order and your current academic level. With us you can be sure that you will never be overcharged for your paper, and that you will always get it cheaper than elsewhere while quality remains high. We also offer discounts for regular and new customers, so you can enjoy our service even more. For instance, we are glad to announce that many lifetime discounts are available for returning clients; we will help you save more on your further orders. Writing is an art that our professionals have perfected, and we thus write papers that are rich in content, well designed, thoroughly researched and very assertive. Let us help you meet your demand for good grades with the best custom papers we write, and you will never regret it!

Hadoop in Action


4.2 Hadoop blocks

Running Hadoop means running a set of daemons, or resident programs, on the various servers in the network. The daemons perform specific roles; a daemon may exist on a single server or across many servers. The daemons include the NameNode, DataNode, Secondary NameNode, JobTracker and TaskTracker (Chuck, 2014). The daemons and their roles within Hadoop form a significant part of the article, as illustrated in the following discussion.

4.2.3 Secondary NameNode

The Secondary NameNode (SNN) is an assistant daemon that monitors the state of the cluster’s HDFS. It is similar to the NameNode in the sense that each cluster has one SNN, which typically resides on its own machine as well; no other DataNode or TaskTracker daemons run on the same server. The SNN differs from the NameNode in that it does not receive or record any real-time changes to HDFS. Instead, it communicates with the NameNode to take snapshots of the HDFS metadata at intervals defined by the cluster configuration. Because the NameNode is a single point of failure for a Hadoop cluster, the SNN snapshots help reduce downtime and data loss. Nevertheless, a NameNode failure requires human intervention to reconfigure the cluster to use the Secondary NameNode as the primary NameNode.


The JobTracker daemon is the link between an application and Hadoop. When code is submitted to the cluster, the JobTracker establishes an execution plan by determining which files to process, assigns nodes to different tasks, and monitors the performance of the nodes. In case of task failure, the JobTracker automatically relaunches the task, possibly on a different node. Notably, there is a single JobTracker daemon per Hadoop cluster, which runs on a server acting as a master node of the cluster.


As with the storage daemons, the computing daemons also follow a master/slave format: the JobTracker is the master overseeing the overall execution of a MapReduce job, whereas the TaskTrackers manage the execution of individual tasks on each slave node. This interaction is illustrated in Figure x.x. Each TaskTracker is responsible for carrying out the individual tasks assigned by the JobTracker. Although there is a single TaskTracker per slave node, a TaskTracker may spawn multiple JVMs to handle many map or reduce tasks concurrently. One of the responsibilities of a TaskTracker is to maintain constant communication with the JobTracker. If the JobTracker fails to receive a heartbeat from a TaskTracker within a specified amount of time, it assumes that the TaskTracker has crashed and resubmits the corresponding tasks to different nodes within the cluster.
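The heartbeat-and-reassign rule above can be illustrated with a small sketch. This is not Hadoop’s real implementation, only a simplified model of the behavior described; the class name, timeout value and tracker names are all hypothetical:

```python
class JobTrackerSketch:
    """Illustrative sketch of the heartbeat rule: if no heartbeat arrives
    from a TaskTracker within `timeout` seconds, its tasks are reassigned."""

    def __init__(self, timeout: float = 600.0):
        self.timeout = timeout
        self.last_heartbeat = {}   # tracker name -> time of last heartbeat
        self.tasks = {}            # tracker name -> list of assigned task ids

    def heartbeat(self, tracker: str, now: float) -> None:
        self.last_heartbeat[tracker] = now

    def assign(self, tracker: str, task_id: str) -> None:
        self.tasks.setdefault(tracker, []).append(task_id)

    def expired_trackers(self, now: float):
        """Trackers whose last heartbeat is older than the timeout."""
        return [t for t, last in self.last_heartbeat.items()
                if now - last > self.timeout]

    def reassign_failed(self, now: float, healthy_tracker: str):
        """Move tasks off trackers presumed crashed onto a healthy node."""
        moved = []
        for tracker in self.expired_trackers(now):
            moved.extend(self.tasks.pop(tracker, []))
            del self.last_heartbeat[tracker]
        for task in moved:
            self.assign(healthy_tracker, task)
        return moved

jt = JobTrackerSketch(timeout=600.0)
jt.assign("tt1", "map_0"); jt.heartbeat("tt1", now=0.0)
jt.assign("tt2", "map_1"); jt.heartbeat("tt2", now=0.0)
jt.heartbeat("tt2", now=500.0)          # tt2 keeps reporting; tt1 goes silent
moved = jt.reassign_failed(now=700.0, healthy_tracker="tt2")
print(moved)                            # ['map_0'] reassigned to tt2
```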

  1. HDFS

5.x File Read and Write

An application adds data to HDFS by creating a new file and filling it with data. After the file is closed, the bytes written cannot be altered or removed, although new data can be appended by reopening the file for append. HDFS implements a single-writer, multiple-reader model. An HDFS client that opens a file for writing is granted a lease on the file; no other client can write to it. The writing client renews the lease periodically by sending heartbeats to the NameNode, and when the file is closed the lease is revoked.

The duration of a lease is bound by a soft limit and a hard limit. Until the soft limit expires, the writer has exclusive write access to the file. If the soft limit expires and the client has neither closed the file nor renewed the lease, another client may preempt the lease. If the hard limit expires and the client has still failed to renew the lease, HDFS assumes that the client has quit and automatically closes the file on the writer's behalf to recover the lease. The writer's lease does not prevent other clients from reading the file; a file may have many concurrent readers.
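The soft/hard limit behaviour can be sketched as a small state machine. The limit values below are illustrative (the published HDFS design uses on the order of one minute and one hour), and the class is a hypothetical simplification:

```python
SOFT_LIMIT = 60.0     # after this, another client may preempt the lease
HARD_LIMIT = 3600.0   # after this, HDFS recovers the lease itself
# (illustrative values, roughly one minute and one hour)

class LeaseSketch:
    def __init__(self, holder, granted_at):
        self.holder = holder
        self.renewed_at = granted_at

    def renew(self, now):
        self.renewed_at = now

    def state(self, now):
        age = now - self.renewed_at
        if age < SOFT_LIMIT:
            return "exclusive"    # only the holder may write
        if age < HARD_LIMIT:
            return "preemptible"  # another client may take over the lease
        return "expired"          # NameNode closes the file, recovers lease

lease = LeaseSketch("client-1", granted_at=0.0)
print(lease.state(now=30.0))      # exclusive
print(lease.state(now=300.0))     # preemptible
print(lease.state(now=4000.0))    # expired
```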

An HDFS file consists of blocks. When a new block is needed, the NameNode allocates a block with a unique block ID and determines a list of DataNodes to host its replicas. The DataNodes form a pipeline, ordered so as to minimize the total network distance from the client to the last DataNode. Bytes are pushed through the pipeline as a sequence of packets. The bytes written by the application first buffer on the client side; once a packet buffer fills (typically 64 KB), the packet is pushed through the pipeline. The next packet can be pushed before the acknowledgement of the previous packet arrives, although the number of outstanding packets is limited by the client's outstanding-packet window size (Konstantin, Hairong, Sanjay and Robert, 2010).
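The client-side packetization step can be sketched in a few lines. This is only an illustration of splitting a write buffer into pipeline packets; the function name is hypothetical and the real client interleaves this with acknowledgement handling:

```python
PACKET_SIZE = 64 * 1024  # HDFS pushes data through the pipeline in ~64 KB packets

def packetize(data, packet_size=PACKET_SIZE):
    """Split a client-side write buffer into the packets that would be
    streamed through the DataNode pipeline (sketch only)."""
    return [data[i:i + packet_size] for i in range(0, len(data), packet_size)]

payload = b"x" * (150 * 1024)          # 150 KB written by the application
packets = packetize(payload)
print([len(p) for p in packets])       # two full 64 KB packets plus a tail
```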

When data is written to an HDFS file, HDFS provides no guarantee that the data will be visible to new readers until the file is closed. If a user application requires such visibility, it must explicitly call the hflush operation. The current packet is then immediately pushed through the pipeline, and the hflush operation waits until every DataNode in the pipeline acknowledges the transmission of the packet. All data written before the hflush operation is then guaranteed to be visible to readers.



When no errors occur, block construction goes through three stages, as shown in Figure 2, which illustrates a pipeline of three DataNodes and a block of five packets. In the figure, bold lines represent data packets, dashed lines represent acknowledgement messages, and thin lines represent the control messages that set up and close the pipeline. Vertical lines represent activity at the client and at the three DataNodes, with time flowing downward. The interval t1-t2 is the data streaming stage, where t1 is the time the first data packet is sent and t2 is the time the acknowledgement for the last packet is received. The figure also shows an hflush operation on the second packet; the hflush indication travels together with the data packet and is not a separate operation. The interval t2-t3 is the pipeline close stage for the block.

In a cluster of thousands of nodes, failures of components (most commonly storage faults) occur daily. A replica stored on a DataNode may become corrupted because of faults in memory, disk or network. HDFS therefore generates and stores checksums for each data block of an HDFS file. Checksums are verified by the HDFS client while reading, to detect any corruption caused by the client, the DataNodes or the network. When a client creates an HDFS file, it computes a sequence of checksums for each block and sends them to the DataNodes along with the data. When HDFS reads a file, each block's data and checksums are shipped to the client. The client computes checksums for the received data and verifies that the newly computed checksums match the checksums it received. If they do not match, the client notifies the NameNode of the corrupt replica and then fetches a different replica of the block from another DataNode.
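The checksum generate-and-verify cycle can be sketched as follows. HDFS checksums data in small chunks (512 bytes by default) using CRC variants; the helper names here are hypothetical and CRC32 stands in for the exact algorithm:

```python
import zlib

BYTES_PER_CHECKSUM = 512  # HDFS checksums data in small chunks (512 B default)

def chunk_checksums(block):
    """Compute a CRC32 per 512-byte chunk, as a writer would before
    shipping a block to the DataNodes (sketch; HDFS uses CRC32/CRC32C)."""
    return [zlib.crc32(block[i:i + BYTES_PER_CHECKSUM])
            for i in range(0, len(block), BYTES_PER_CHECKSUM)]

def verify(block, checksums):
    """A reader recomputes the checksums and compares; a mismatch means
    the replica is corrupt and another DataNode should be tried."""
    return chunk_checksums(block) == checksums

block = bytes(range(256)) * 8          # a 2 KB "block"
sums = chunk_checksums(block)
print(verify(block, sums))             # True: replica intact
corrupted = b"\xff" + block[1:]
print(verify(corrupted, sums))         # False: client would report corruption
```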

When a client opens a file to read, it fetches the list of blocks and the locations of each block's replicas from the NameNode. The locations are ordered by their distance from the reader. If a read attempt fails, the client tries the next replica in the sequence. A read fails when the target DataNode is unavailable, when the node no longer hosts a replica of the block, or when the replica is found to be corrupt by the checksum test. HDFS also permits a client to read a file that is open for writing. When reading such a file, the length of the last block still being written is unknown to the NameNode; in this case the client asks one of the replicas for the latest length before reading its content. The design of HDFS I/O is particularly optimized for batch processing systems, such as MapReduce, which require high throughput for sequential reads and writes. However, ongoing work improves read/write response times in order to support applications such as Scribe, which streams data into HDFS in real time, and HBase, which provides random access to large tables.
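The try-nearest-then-fall-back read logic can be sketched as a simple loop over the distance-ordered replica list. The node names and the `fetch` callback are hypothetical stand-ins for the client's actual DataNode transfer:

```python
def read_block(replicas, fetch):
    """Try replicas in order of increasing distance from the reader;
    fall back to the next one on failure (sketch of the client logic)."""
    for datanode in replicas:
        try:
            return fetch(datanode)
        except IOError:
            continue                   # node down, or checksum mismatch
    raise IOError("all replicas failed")

# Hypothetical fetch: the nearest replica is unreachable, the next works.
def fetch(node):
    if node == "dn-rack1-a":
        raise IOError("connection refused")
    return b"block-data-from-" + node.encode()

print(read_block(["dn-rack1-a", "dn-rack2-b", "dn-rack2-c"], fetch))
```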

5.x Block Placement

For a large cluster, it is not practical to connect all nodes in a flat topology. The common practice is to spread the nodes across multiple racks. Nodes on a rack share a switch, and rack switches are connected by one or more core switches. Communication between two nodes on different racks has to go through multiple switches. In most cases, the network bandwidth between nodes on the same rack is greater than the network bandwidth between nodes on different racks. Figure x.x describes a cluster with two racks, each of which contains three nodes.

Figure x.x cluster topology example


HDFS estimates the network bandwidth between two nodes by their distance. The distance from a node to its parent node is one. The distance between two nodes is calculated by summing their distances to their closest common ancestor. A shorter distance between two nodes means greater bandwidth for transferring data (Konstantin, Hairong, Sanjay and Robert, 2010).
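For the two-level rack/node topology of Figure x.x, this distance rule can be sketched directly. The `"rack/node"` naming convention is an assumption made for the illustration:

```python
def distance(node_a, node_b):
    """Distance in a two-level rack/node topology (sketch): one hop per
    level to the closest common ancestor, summed for both sides."""
    rack_a = node_a.split("/")[0]
    rack_b = node_b.split("/")[0]
    if node_a == node_b:
        return 0          # same node
    if rack_a == rack_b:
        return 2          # node -> rack switch (1) + rack switch -> node (1)
    return 4              # node -> rack -> core (2) + core -> rack -> node (2)

print(distance("rack1/n1", "rack1/n1"),
      distance("rack1/n1", "rack1/n2"),
      distance("rack1/n1", "rack2/n1"))  # 0 2 4
```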

HDFS allows an administrator to configure a script that returns a node's rack identification given the node's address. The NameNode is the central place that resolves the rack location of each DataNode. When a DataNode registers with the NameNode, the NameNode runs the configured script to decide which rack the node belongs to. If no script is configured, the NameNode assumes that all nodes belong to a default single rack. The placement of replicas is critical to HDFS data reliability and read/write performance. A good replica placement policy improves data reliability, availability and network bandwidth utilization. Moreover, HDFS provides a configurable block placement policy interface so that users and researchers can experiment with and test policies that are optimal for their applications.

The default HDFS block placement policy provides a trade-off between minimizing the write cost and maximizing data reliability, availability and aggregate read bandwidth. When a new block is created, HDFS places the first replica on the node where the writer is located, and the second and third replicas on two different nodes in a different rack. The remaining replicas are placed on random nodes, with the restrictions that no node contains more than one replica of any block, and no rack contains more than two replicas of the same block when there are sufficient racks. Placing the second and third replicas on a different rack distributes the block replicas of a single file across the cluster. If the first two replicas were instead placed on the same rack, then for any file two-thirds of its block replicas would share one rack (Konstantin, Hairong, Sanjay and Robert, 2010).
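The first-three-replicas rule can be sketched as follows. This hypothetical helper ignores the many real constraints (node load, available space, decommissioning) and only shows the writer-node / remote-rack choice:

```python
import random

def choose_targets(writer_node, nodes_by_rack, writer_rack):
    """Default-policy sketch: first replica on the writer's node, second
    and third on two different nodes of one other rack (illustrative
    helper, not Hadoop's BlockPlacementPolicyDefault)."""
    targets = [writer_node]
    other_racks = [r for r in nodes_by_rack if r != writer_rack]
    remote = random.choice(other_racks)
    targets += random.sample(nodes_by_rack[remote], 2)  # two distinct nodes
    return targets

cluster = {"rack1": ["n1", "n2", "n3"], "rack2": ["n4", "n5", "n6"]}
t = choose_targets("n1", cluster, "rack1")
print(t[0], sorted(t[1:]))  # n1, plus two distinct rack2 nodes
```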

After the target nodes are selected, they are organized into a pipeline in order of their proximity to the first replica. Before reading, the NameNode first checks whether the client's host is located in the cluster; if so, block locations are returned to the client ordered by distance, and the block is read from the nearest DataNode. It is common for a MapReduce application to run on cluster nodes, but as long as a host can access the NameNode and the DataNodes, it can execute the HDFS client. This policy reduces the inter-rack and inter-node write traffic and generally improves write performance. Because the chance of a rack failure is far smaller than that of a node failure, the policy does not compromise data reliability and availability guarantees. In the usual case of three replicas, it can also reduce the aggregate network bandwidth used when reading data, since a block is placed in only two racks rather than three. In summary, the default HDFS replica placement policy has two key properties: first, no DataNode contains more than one replica of any block; second, no rack contains more than two replicas of the same block, provided there are sufficient racks in the cluster.

5.x Replication Management

The NameNode endeavours to ensure that each block always has the intended number of replicas. The NameNode detects that a block has become under- or over-replicated when a block report from a DataNode arrives. When a block becomes over-replicated, the NameNode chooses a replica to remove. Its first preference is not to reduce the number of racks that host replicas; its second preference is to remove a replica from the DataNode with the least amount of available disk space. The goal is to balance storage utilization across DataNodes without reducing the block's availability.

When a block becomes under-replicated, it is put in the replication priority queue. A block with only one replica has the highest priority, whereas a block with a number of replicas greater than two-thirds of its replication factor has the lowest priority. A background thread periodically scans the head of the replication queue and decides where to place new replicas. Block replication follows a policy similar to that of new block placement: if the number of existing replicas is one, HDFS places the next replica on a different rack; if the block has two existing replicas on the same rack, the third replica is placed on a different rack; otherwise, the third replica is placed on a different node in the same rack as an existing replica. The goal here is to reduce the cost of creating new replicas (Konstantin, Hairong, Sanjay and Robert, 2010).
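The queue ordering described above can be sketched with a standard heap. The priority values and block names are illustrative, not Hadoop's internal representation:

```python
import heapq

def priority(live_replicas, target):
    """Replication urgency (sketch of the NameNode queue ordering):
    0 = block down to a single replica, replicate first;
    2 = block already above two thirds of its target, replicate last;
    1 = everything in between."""
    if live_replicas == 1:
        return 0
    if 3 * live_replicas > 2 * target:
        return 2
    return 1

queue = []
for block_id, live, target in [("blk_b", 2, 3), ("blk_c", 5, 6), ("blk_a", 1, 3)]:
    heapq.heappush(queue, (priority(live, target), block_id))

order = [heapq.heappop(queue)[1] for _ in range(3)]
print(order)  # blk_a (one replica left) first, blk_c (5 of 6) last
```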

The NameNode also ensures that not all replicas of a block are located on a single rack. If the NameNode detects that all of a block's replicas have ended up on one rack, the block is treated as mis-replicated, and the NameNode replicates it to a different rack using the same block placement policy. After the NameNode receives notification that the new replica has been created, the block becomes over-replicated, and the NameNode then removes one of the old replicas, because the over-replication policy prefers not to reduce the number of racks.

  1. MapReduce

Reading and writing

MapReduce's distributed processing makes certain assumptions about the data being processed, while still providing flexibility in handling a variety of data formats. Input data typically resides in large files, often over a hundred gigabytes. One of the fundamental principles of MapReduce's processing power is splitting the input data into chunks, which can then be processed in parallel on multiple machines. The chunks, called input splits, should be small enough to allow sufficiently granular parallelization: if all the input data were placed into a single split, no parallelization would occur.

According to Chuck (2014), the splits should also not be so small that the overhead of starting and stopping the processing of each split becomes a large fraction of the execution time. This principle of dividing input data, a single massive file split up for parallel processing, explains some of the design decisions in Hadoop's FileSystem abstraction, and in HDFS in particular. The Hadoop FileSystem provides the class FSDataInputStream for reading files, rather than using Java's java.io.DataInputStream directly; FSDataInputStream extends DataInputStream with the random read access that MapReduce requires, because a machine may begin processing a split that sits in the middle of an input file. Without random access, it would be extremely cumbersome to read the file from the beginning up to the start of the split. HDFS is likewise designed to store data that MapReduce can split and process in parallel. HDFS stores files in blocks spread across multiple machines; roughly speaking, each block is held by a different machine, so parallelization is automatic when each split corresponds to a block processed by the machine holding it. Moreover, since HDFS replicates blocks on multiple nodes for reliability, MapReduce can choose any of the nodes that have a copy of a split's block. By default, Hadoop considers each line of an input file to be a record, with the key/value pair being the byte offset and the content of the line, respectively.
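The one-split-per-block division can be sketched as follows. The function is a hypothetical simplification of what FileInputFormat does; the real code also adjusts split boundaries so that records are not cut in half:

```python
def compute_splits(file_size, block_size=128 * 1024 * 1024):
    """Return (offset, length) pairs, one split per HDFS block, the way
    a FileInputFormat-style divider prepares a large input for parallel
    map tasks (sketch; ignores record boundaries)."""
    splits = []
    offset = 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        splits.append((offset, length))
        offset += length
    return splits

one_gb = 1024 ** 3
print(len(compute_splits(one_gb)))            # 8 map tasks for a 1 GB file
print(compute_splits(300 * 1024 ** 2)[-1])    # last split is the short tail
```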

10 Hadoop security

Big data is a challenge not only in terms of storing, processing and analysing data, but also in terms of managing and securing large data assets. Hadoop did not originally have a built-in security system. As enterprises adopted Hadoop, a security model based on Kerberos evolved. However, the distributed nature of the ecosystem, along with the wide range of applications built on top of Hadoop, complicates the process of securing it in an enterprise.

A typical big data ecosystem involves multiple stakeholders interacting with the system. For example, analysts within an organization may interact with the ecosystem through business intelligence and analytics tools, yet a business analyst in the finance department should not be able to access data belonging to the human resources department. Business intelligence tools therefore need system-wide, role-based access controls in the Hadoop ecosystem, which depend on the protocols and the data used to communicate. One of the biggest challenges for a big data project within an enterprise is securing the integration of external data sources such as CRM systems, existing ERP systems, websites and social media. External connectivity must be established so that data extracted from these external sources is made available securely within the Hadoop ecosystem (Sudheesh, 2013).

10.1 Understanding the security challenges

During the initial development of Hadoop, security was not a consideration. The original objective of Hadoop was managing large amounts of public web data, so data security and privacy were not priorities; it was assumed that Hadoop clusters would consist of cooperating, trusted machines used by trusted users in a secure environment. Initially there was no security model: Hadoop did not authenticate users or services, and provided no data privacy. Since Hadoop was designed to execute code over a distributed cluster of machines, anyone could submit code to be executed. Although file permissions and auditing were implemented in early distributions, such authorization controls were easily circumvented, because any user could impersonate any other user with a command-line switch. Because impersonation was prevalent and could be performed by almost anyone, the security mechanisms that did exist were not effective (Alexey, Kevin, Boris, 2013).

In the past, organizations concerned about these security anomalies addressed them by placing Hadoop clusters on private networks and restricting access to authorized users. Even so, because Hadoop had few security controls within it, accidents and security incidents were common in such environments. Well-intentioned users could make mistakes, for example deleting data used by other users; distributed deletes can destroy huge amounts of data within seconds. All users and programmers had the same level of access to all data in the cluster, any job could access any data in the cluster, and any user could read any data set. Given confidentiality requirements, this was a serious concern. MapReduce had no concept of authentication or authorization, so a mischievous user could lower the priority of other Hadoop jobs in order to make his own jobs complete faster.

As Hadoop gained popularity, data analysts and security experts began to express concern about the insider threat that malicious users posed to Hadoop clusters. A malicious developer could easily write code to impersonate other users of Hadoop services, for example by writing a new TaskTracker and registering it as a Hadoop service, or by impersonating the hdfs or mapred users and deleting everything in HDFS. Because the DataNodes enforced no access control, a malicious user could read arbitrary data blocks from them, bypassing access restrictions and undermining the integrity of the data being analysed. An intruder could also submit jobs to a JobTracker and have them executed arbitrarily.

As Hadoop's popularity grew, stakeholders realized that comprehensive security controls needed to be built into it. Security experts called for an authentication mechanism that would require users, client programs and servers within a Hadoop cluster to prove their identities. Authorization was also cited as a necessity, along with other security concerns including auditing, privacy, integrity and confidentiality. Some of these issues could not be addressed at all while there was no authentication, so authentication became the critical first step in the redesign of Hadoop security.

The need for authentication led a team at Yahoo! to introduce Kerberos for Hadoop security. Their design had several requirements. First, users may access only the HDFS files for which they have permission. Second, users may access and modify only their own MapReduce jobs. Third, users must be authenticated, to prevent unauthorized TaskTrackers, JobTrackers, DataNodes and NameNodes. Furthermore, services must be authenticated, to prevent unauthorized services from joining a cluster. Finally, Kerberos tickets and credentials must be transparent to users and applications. Kerberos was accordingly integrated into Hadoop as the mechanism implementing secure network authentication and controlling Hadoop processes. Since the introduction of Kerberos, Hadoop and the tools in the Hadoop ecosystem have continued to evolve, providing security features that meet the needs of modern users (Alexey, Kevin, Boris, 2013).


10.2 Hadoop Kerberos security implementation

Enforcing security in a distributed system such as Hadoop is complex. The requirements for securing Hadoop were laid out by experts in the Hadoop security design. In summary, the security requirements include user-level access controls, service-level access controls, user-service authentication, the Delegation Token, the Job Token and the Block Access Token.

User-level access controls

The user-level access controls comprise a number of requirements. First, Hadoop users should only be able to access data that belongs to them. Second, only authenticated users can submit jobs to the Hadoop cluster. Third, users can view, modify and kill only their own jobs. Fourth, only authenticated services can register as DataNodes or TaskTrackers. Finally, block access within a DataNode should be secured, so that only authenticated users can access the data stored in the Hadoop cluster.

Service-level access controls

The list of service-level access includes;

Ø  Scalable Authentication: A Hadoop cluster consists of a large number of nodes, so the authentication model must scale to support such a large network of authentications.

Ø  Impersonation: A Hadoop service should be able to impersonate the user submitting a job, so that user isolation is maintained.

Ø  Self-Served: A Hadoop job may run longer than the lifetime of the user's authentication, so the job must be able to renew its delegated user authentication itself in order to complete.

Ø  Secure IPC: Hadoop services should mutually authenticate one another and ensure that any communication between them is secured.

These requirements are met by having Hadoop leverage the Kerberos authentication protocol, together with internally generated tokens, to secure the Hadoop cluster.

User and service authentication

Authentication of users to the NameNode and JobTracker services is provided through Hadoop's remote procedure call mechanism, using the Simple Authentication and Security Layer (SASL) framework. Kerberos is used as the authentication protocol within SASL to authenticate users, and all Hadoop services support Kerberos authentication. When a client submits a MapReduce job to the JobTracker, the job needs to access Hadoop resources on the user's behalf. This is achieved using three types of token: the Delegation Token, the Job Token and the Block Access Token.

Delegation Token

Delegation Token authentication is a two-party authentication protocol based on Java SASL DIGEST-MD5. The Delegation Token is used between the user and the NameNode to authenticate the user. Once the user has authenticated to the NameNode with Kerberos, the NameNode issues a Delegation Token to the user; a user who holds a Delegation Token need not go through Kerberos authentication again. When requesting the token, the user also designates the JobTracker or ResourceManager as its renewer. After authentication is complete, the Delegation Token is sent securely to the JobTracker or ResourceManager, which then assumes the role of the user, using the Delegation Token to access HDFS resources. For a long-running job, the JobTracker renews the Delegation Token.
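The renew-before-expiry behaviour can be sketched as follows. The interval and maximum-lifetime values are purely illustrative, and the class is a hypothetical simplification, not Hadoop's token classes:

```python
RENEW_INTERVAL = 24 * 3600      # illustrative: token must be renewed daily
MAX_LIFETIME = 7 * 24 * 3600    # illustrative: and can live at most a week

class DelegationTokenSketch:
    """Toy renewal logic: the designated renewer keeps a long-running
    job's token alive by renewing it before each expiry, up to a
    maximum lifetime (sketch of the idea only)."""

    def __init__(self, issued_at):
        self.issued_at = issued_at
        self.expires_at = issued_at + RENEW_INTERVAL

    def renew(self, now):
        if now >= self.expires_at:
            raise ValueError("token already expired")
        # Extend the expiry, but never beyond the maximum lifetime.
        self.expires_at = min(now + RENEW_INTERVAL,
                              self.issued_at + MAX_LIFETIME)

    def valid(self, now):
        return now < self.expires_at

tok = DelegationTokenSketch(issued_at=0)
tok.renew(now=20 * 3600)            # renewed before the first expiry
print(tok.valid(now=30 * 3600))     # True: renewal extended the lifetime
```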

Job Token

Jobs run on the TaskNodes, and user access must be secured there as well. When a user submits a MapReduce job to the JobTracker, it creates a secret key that is shared with the TaskTrackers that will run the job; this secret key is the Job Token. The Job Token is stored on the local disk of each TaskTracker, in a directory accessible only to the user who submitted the job. The TaskTracker starts the child JVM task using the identity of that user, so the child JVM can read the Job Token from the directory and use it to communicate securely with the TaskTracker. The Job Token thus ensures that an authenticated user's job can access only the folders and jobs for which it is authorized in the local file system of the TaskNodes. It is also used when a reduce task contacts the TaskTracker that ran a map task to collect the mapper's output file, securing that communication as well.

Block Access Token

When a client requests data from HDFS, it must fetch the data blocks from a DataNode after fetching the blocks' identities from the NameNode. There must therefore be a secure mechanism by which the user's authorization, established at the NameNode, is passed to the DataNode. The purpose of the Block Access Token is to ensure that only authorized users can access the data blocks stored in the DataNodes. When a client wants to access data stored in HDFS, it asks the NameNode for the block IDs and the DataNode locations, and then contacts the DataNodes to fetch the blocks. To enforce the NameNode's authorization at the DataNode, Hadoop implements the Block Access Token: the NameNode issues Block Access Tokens to the Hadoop client, which carry the data-access authorization information to the DataNode.

The Block Access Token (BAT) is based on symmetric key cryptography, with the NameNode and all DataNodes sharing a common secret key. Each DataNode receives this secret key when it registers with the NameNode, and the key is regenerated periodically. A BAT is lightweight and contains an access mode, a block ID, an owner ID, a key ID and an expiration date. The access mode defines the permissions available to the user for the requested block ID. BATs generated by the NameNode are not renewable and must be fetched again once a token expires. BATs thus ensure that the data blocks stored in the DataNodes are secured and that only authorized users can access them (Sudheesh, 2013). The following figure shows the various interactions in a secured Hadoop cluster:

Interactions in a secured Hadoop cluster
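The token scheme described above can be sketched with a keyed MAC over the token fields. The field layout, secret and function names here are illustrative assumptions; the point is only that the DataNode can verify a token offline using the key it shares with the NameNode:

```python
import hmac, hashlib

SECRET = b"namenode-datanode-shared-key"   # regenerated periodically in real HDFS

def make_token(block_id, owner, mode, expires, key=SECRET):
    """Sketch of a Block Access Token: the NameNode signs the token
    fields with the key it shares with the DataNodes (field layout
    is illustrative, not Hadoop's wire format)."""
    payload = f"{block_id}|{owner}|{mode}|{expires}".encode()
    mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, mac

def datanode_verify(payload, mac, now, key=SECRET):
    """The DataNode recomputes the MAC and checks the expiry date; it
    never needs to contact the NameNode to authorize the access."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False                      # forged or tampered token
    expires = int(payload.rsplit(b"|", 1)[1])
    return now < expires                  # expired tokens are rejected

payload, mac = make_token("blk_42", "alice", "READ", expires=1000)
print(datanode_verify(payload, mac, now=500))    # True: valid, unexpired
print(datanode_verify(payload, mac, now=2000))   # False: token expired
forged = payload.replace(b"alice", b"mallory")
print(datanode_verify(forged, mac, now=500))     # False: MAC mismatch
```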

The overall operation of Kerberos in Hadoop involves a few key steps. First, all Hadoop services authenticate themselves with the KDC; the DataNodes register with the NameNode, the TaskTrackers register with the JobTracker, and the NodeManagers register with the ResourceManager. Second, the client authenticates itself with the KDC and requests service tickets for the NameNode and the JobTracker or ResourceManager. Third, for the client to access an HDFS file, it connects to the NameNode server; the NameNode authenticates the client and provides it with the authorization details along with the BATs. The BATs are required by the DataNodes to validate the client's authorization and grant access to the corresponding blocks. Finally, to submit a MapReduce job to the Hadoop cluster, the client requests a Delegation Token from the JobTracker; the Delegation Token is subsequently used for submitting MapReduce jobs to the cluster (Sudheesh, 2013).



Alexey, Y., Kevin, T. & Boris, L. (2013). Professional Hadoop Solutions. Indianapolis: Wrox.

Chuck, L. (2014). Hadoop in Action. Shelter Island, NY: Manning Publications.

Konstantin, S., Hairong, K., Sanjay, R. and Robert, C. (2010). The Hadoop Distributed File System. In Proceedings of the 26th IEEE Symposium on Mass Storage Systems and Technologies (MSST '10), Incline Village, Nevada.

Sudheesh, N. (2013). Securing Hadoop. Birmingham: Packt Publishing.
