MANAGING CHANGE AND CREATIVITY IN AN ORGANIZATION

For businesses to thrive in the current business environment, it is important that they adjust their approaches to business operations and management structures to nurture creativity and innovation. This paper discusses why hierarchical management structures are likely to be less evident in the future, being replaced by structures that nurture innovation and creativity. The paper also discusses the shift that has taken place in the global economy from being knowledge driven to being creativity driven. Several factors that ought to be managed in order to enhance the creativity process in organizations are also discussed, as is the process of change management. Whilst the paper argues that hierarchical management structures shall diminish in the future, it also points out some advantages that such structures have in organizations.


Contents

1. Introduction

2. Changes in the Global Economy

2.1. The Shift from Knowledge Economy to Creativity Economy

2.2. Change in Management Structures

3. Factors that need to be managed in order to enhance creativity

3.1. Job-Level Factors

3.2. Team Factors

3.3. Organizational Level Factors

4. The Change Management Process

4.1. Impacts of the change management process

5. Conclusion

 


1.      Introduction

The need for change and creativity has existed in organizations for a long time and continues to intensify. The current business environment is fast-moving, and organizations that fail to adjust to this pace are likely to underperform (Brown and Osborne, 2012). Embracing change and encouraging creativity in an organization partly depends on the management structures within the organization (Andriopoulos & Dawson, 2009). Whilst hierarchical management structures are still evident in many companies, they limit innovation and creativity (Foss et al., 2013). Therefore, this report supports the statement that “in future, hierarchical management structures will be less evident. The management of intellectual capital will require skills that nurture creativity and innovation in workforce rather than compliance as in the past.” Even though it is apparent that hierarchical management shall be less relevant in the future, it does have strengths, including clear promotional paths for employees and effective departmentalization of skills (Diefenbach & Todnem, 2012).

As argued by Boje et al. (2012), changes take place both within and outside the organizational environment. In the external business environment, there is a shift from the traditional knowledge-based economy to the creative industrial economy. In addition, competition among companies is shifting from being price driven to being driven by how creatively goods and services are designed for clients (Cameron & Green, 2012). Within the organization, changes are taking place in the nature of workers' tasks and responsibilities: tasks are now more cognitive, as opposed to the traditional repetitive tasks. Management styles and employee expectations are also among the organizational aspects that are changing (Myers et al., 2012). Since change in organizations is evidently inevitable, it is necessary for organizations to manage their change processes effectively (Beerel, 2009). In addition to supporting the argument that hierarchical management shall be less evident in the future, this report also explains the factors that have to be managed to enhance innovation and creativity.

2.      Changes in the Global Economy

Several shifts and changes have taken place in the global economy. Of interest in this report are the transformation of the economy from knowledge based to creativity based, the change in business competition from price-based to innovation-based, and the changes in management styles. An example that demonstrates this shift in the global economy is the emergence of a wide range of creative industries, especially in Europe (Hesmondhalgh, 2002).

2.1. The Shift from Knowledge Economy to Creativity Economy

The current business environment is characterized by several dynamics that have made organizations change their approaches to business issues. To thrive in the present-day corporate environment, companies increasingly recognize the need to incorporate creativity in their day-to-day activities (Cooke et al., 2012). Creativity is the process through which new ideas or alternatives for solving different issues are generated; the implementation of these ideas is referred to as innovation. According to Andriopoulos and Lowe (2000), business entities are categorized as creative if they earn their main income by generating novel ideas that are appropriate in tackling the needs of their target clients. Creativity is a process that goes through several stages: preparation, incubation, insight, evaluation and elaboration (Wallas, 1926). Amabile (1983), on the other hand, suggests that creativity is a five-stage process involving problem presentation, preparation, response generation, response validation and the outcome. Amabile's (1983) model is summarized in the figure below.

Fig. 1: Amabile’s (1983) model of the creativity process

Adapted from: Amabile (1983)

Several organizations have prospered by encouraging their employees to be creative and by creating a work environment that nurtures innovation; these companies include Google, Facebook, Apple and Microsoft. Among the ways in which companies are becoming more creative is the elimination of hierarchical barriers that slow down communication and responses to change. Eardley and Uden (2011) posit that hierarchical management structures are based on the notion that management should create control, certainty and predictability. Even though bureaucracy has its advantages, the current business environment requires organizations to be flexible and ready to face unpredictable situations, which can only be achieved by encouraging creativity. Whereas competition between companies offering the same service or product was once based mostly on price, creativity has also become an important aspect of competition. Creativity has been incorporated into advertising and other promotional techniques, product design, pricing strategies and distribution, which are the key components of marketing (Slater et al., 2010).

Even though creativity and innovation are essential for survival in the current business environment, they also come with setbacks. For instance, creativity involves taking risks with no certainty of a positive outcome. This is one of the reasons why certain organizations stick to hierarchical structures (Andriopoulos & Dawson, 2009).

2.2. Change in Management Structures

The need to encourage creativity in organizations has also affected the management structures used in present-day organizations. There is an ongoing shift from hierarchical management structures to adhocratic or flat management, in which individuals are empowered to take initiatives that contribute to the development of the organization (Myers et al., 2012). Tseng (2010) argues that adhocracy is ideal for organizations that aim to increase their flexibility so as to take prompt advantage of business opportunities. One organization that uses this management structure is Google, which has succeeded by allowing its employees to make innovative contributions to the company. For instance, the company has a policy that allows employees to spend 20% of their work time on any project of their choosing. Innovations that resulted from this policy at Google include Gmail, AdSense, Google Maps and Google Talk, among others (Kersten, 2009). This contrasts with bureaucracy, where employee duties, responsibilities and daily schedules are set by management and must be strictly adhered to. As opposed to the hierarchical management structure, adhocratic or flat structures create an atmosphere in which employees from different levels of the organization can communicate freely, learn from each other and offer support where needed (Foss et al., 2013).

Even though many companies are adjusting their management structures to accommodate more creativity and innovation in the workplace, flat or adhocratic structures have several disadvantages. Since such structures are designed for a flexible business environment, solving routine problems can be challenging in adhocratic organizations. Communication paths are also less clear than in hierarchical structures, where communication follows a predetermined path. Moreover, because fewer individuals are involved in strategic decision making in bureaucratic organizations, decisions take less time there than in flat-structured organizations (Eden and Ackerman, 2004).

3.      Factors that need to be managed in order to enhance creativity

Creativity in organizations can be affected by several factors. Depending on how these factors are managed, they can either limit or enhance creativity in an organization. Shalley and Gilson (2004) classify them into job-level factors, team factors and organizational factors. Even though it is vital to encourage creativity in the workplace, it is worth noting that creativity also increases the risks the organization takes on.

3.1. Job-Level Factors

These factors include the characteristics of the job and role expectations (Shalley & Gilson, 2004). Job characteristics have a great influence on employees' motivation and their attitudes towards work. Amabile (1988) argues that this is among the vital components that managers should consider in creativity management because it affects employees' intrinsic motivation and creativity at work. Job allocation should therefore be managed so that employees are assigned jobs with characteristics that trigger their creativity. Shalley et al. (2000) established that matching the work environment to the creative requirements of a job increases employee satisfaction and reduces turnover. The goals and expectations that managers set for employees also have a direct impact on their creativity. Setting clearly defined goals triggers employees' creativity as they devise alternative ways of meeting the goals; on the other hand, when employees have no clear knowledge of what management expects of them, they are likely to be less creative (Shalley et al., 2000). Employee motivation also encourages workplace creativity. This can be done by recognizing and rewarding innovative contributions made by individual employees or teams: the recognized employees feel appreciated, and other employees are motivated to make innovative contributions so that they can be rewarded too (Harzing & van Ruysseveldt, 2004). Other job-level factors that need to be managed effectively to enhance innovation include supervisory support, resource allocation and external job evaluations.

3.2. Team Factors

The teams in which employees are placed also affect their creativity and innovation. Shalley and Gilson (2004) argue that whereas creativity may occur in isolation, it often results from interaction among co-workers. In addition, team members' opinions in support or criticism of ideas affect their motivation to be creative. Researchers have proposed that creating diverse teams and ensuring effective communication among team members encourages creativity. As diversity in organizations continues to increase, creating diverse teams is becoming easier. Whilst diversity increases creativity, employees often prefer teaming up with those who are similar to themselves (Mumford, 2000). Therefore, achieving the desired level of creativity in diverse groups requires considerable effort in the early stages of group development.

3.3. Organizational Level Factors

One of the organizational-level factors that need to be managed effectively to encourage creativity is the organizational climate, which comprises the values, traditions and beliefs of an organization (Shalley & Gilson, 2004). To encourage creativity in the workplace, managers need to foster an organizational climate that encourages experimentation and risk taking. This gives employees psychological safety and the confidence of knowing that they will not be blamed or punished for generating new ideas (Edmondson, 1999). The organizational structure also has a profound impact on employees' creativity. According to Diefenbach and Todnem (2012), organizational structures that are too bureaucratic discourage employees from experimenting with new approaches to solving issues in the workplace; encouraging creativity therefore requires a flatter structure with a wider span of control. The perception of conflict in the workplace is also a determinant of creativity levels within the organization. Creativity thrives in organizations that encourage constructive conflict, because disagreement over certain ideas triggers the generation of new ideas that are more appropriate and novel for the situation at hand (Shalley & Gilson, 2004).

4.      The Change Management Process

As discussed in earlier sections of this report, change is inevitable for any organization that intends to remain relevant in its industry. Factors that prompt organizations to implement change on a constant basis include increasing market globalization, rapid evolution in technology and the need to increase creativity and innovation in organizational operations (Cameron & Green, 2012). It is therefore vital for the change process to be managed effectively so as to minimize the disruption it may cause to business operations. Organizational change management is the framework used to manage the effects of change when a company decides to transition to a more desired state.

Organizational change has been studied by several researchers, who have suggested the steps that need to be undertaken to manage it for the most favourable results. One change management model was suggested by Harris (1975), who proposed five stages through which the change process passes: the planning and initiation stage, the momentum stage, the problems stage, the turning point and the termination stage. The planning and initiation stage involves considering the goals to be achieved by the change process and estimating the resources needed. In the momentum stage, activities directed towards achieving the change objective begin to gain momentum, and the interest and involvement of the people taking part in the change process also begin to increase. In the problems phase, the change process starts to experience unexpected problems, such as the insufficiency of some resources and differing perceptions of the change objectives among employees, which increase the complexity of the process. In the turning point stage, the problems experienced earlier are overcome and the original momentum is regained. The final phase is the termination stage, in which the change is completed, transforming the organization to the desired state. Accomplishing the change process requires effective management of individuals and resources through all these stages. Harris' stages are summarized in the figure below.

Fig. 2: Harris' (1975) change management process

Source: Harris (1975)

Another approach to the change management process was suggested by Kotter (1996), whose model comprises eight steps for managing the change process. These steps are summarized in the chart below.

 

Fig. 3: Kotter’s eight-stage model of change management

Adapted from Kotter (2007)

4.1. Impacts of the change management process

The outcomes of change management chiefly depend on how effectively the process is carried out. Effective management benefits both individual stakeholders and the overall organization. For individuals, it maintains focus and morale as they contribute to the organization's progress, and it helps them adjust from the old state of the organization to the new, transformed state.

Cost reduction is one of the key organizational benefits of change management (Andriopoulos & Dawson, 2009). Change processes are usually very costly, and if they are not effectively managed, confusion or problems arising in the course of the process can increase the company's expenditure on it; the same applies to the time and other resources involved. Another benefit of change management is that it places the organization in a better competitive position in its industry in terms of product and service delivery (Kotter, 2007). It also minimizes resistance to the change process by making the parties involved realize its benefits.

Ineffective change management, on the other hand, is likely to increase the costs the organization incurs in undergoing all the steps of the change process. Employees and customers may also resist the change if they are not well informed and prepared for it (Andriopoulos & Dawson, 2009). This can negatively affect the competitiveness of the company.

5.      Conclusion

Creativity is being embraced by many organizations across the globe to enable them to cope with the changes taking place in the economic environment, including the shift from a knowledge economy to a creativity economy and changes in employee expectations. Given that hierarchical management structures are unsupportive of creativity in organizations, as presented in this report, it is likely that such structures will be replaced by those that encourage creativity. In addition to pointing out the need for creativity in the current organizational environment, this paper has discussed some of the factors that need to be managed to encourage creativity in organizations, as well as the change management process. This report has presented a general view of the need to nurture creativity and innovation in the workplace. However, it is recommended that future research focus on specific industries, because some of them may still benefit from hierarchical management structures.

 

References

Amabile, T., 1983. The Social Psychology of Creativity. New York: Springer Verlag.

Amabile, T.M., 1988. A model of creativity and innovation in organizations. In Staw, B.M. & Cummings, L.L. Research in Organizational Behavior. Greenwich, CT: JAI Press. p.123–167.

Andriopoulos, C. & Dawson, P., 2009. Managing Change, Creativity and Innovation. London: Sage Publications Ltd.

Andriopoulos, C. & Lowe, A., 2000. Enhancing organisational creativity: the process of perpetual challenging. Management Decision, 38(10), pp.734-42.

Beerel, A., 2009. Leadership and Change Management. London : Sage.

Boje, D., Burnes, B. & Hassard, J., 2012. The Routledge Companion to Organizational Change. New York: Routledge.

Brown, K. & Osborne, S.P., 2012. Managing Change and Innovation in Public Service Organizations. Oxon: Routledge.

Cameron, E. & Green, M., 2012. Making Sense of Change Management: A Complete Guide to the Models Tools and Techniques of Organizational Change. London: Kogan Page.

Cooke, P., Parrilli, M.D. & Curbelo, J.L., 2012. Innovation, Global Change and Territorial Resilience. Glos: Edward Elgar Publishing.

Diefenbach, T. & Todnem, R., 2012. Reinventing Hierarchy and Bureaucracy: From the Bureau to Network Organizations. Bingley: Emerald Group Publishing.

Eardley, A. & Uden, L., 2011. Innovative Knowledge Management: Concepts for Organizational Creativity and Collaborative design. Hershey: IGI Global.

Eden, C. & Ackerman, F., 2004. Making Strategy: The Journey of Strategic Management. London: Sage Publications.

Edmondson, A.C., 1999. Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), pp.350-83.

Foss, L., Woll, K. & Moilanen, M., 2013. Creativity and implementations of new ideas: Do organisational structure, work environment and gender matter? International Journal of Gender and Entrepreneurship, 5(3), pp.298-322.

Harris, B.M., 1975. Supervisory behavior in education. Englewood Cliffs: Prentice Hall.

Harzing, A.-W. & van Ruysseveldt, J., 2004. International human resource management. California: Sage Publications Inc.

Hesmondhalgh, D., 2002. The Cultural Industries. New Jersey: SAGE.

Kersten, W., 2009. Supply Chain Performance Management: Current Approaches. Berlin: Erich Schmidt Verlag GmbH & Co.

Kotter, J.P., 1996. Leading change. Cambridge, MA: Harvard Business School Press.

Kotter, J.P., 2007. Leading Change: Why Transformation Efforts Fail. Harvard Business Review, pp.1-10.

Mumford, M.D., 2000. Managing creative people: Strategies and tactics for innovation. Human Resources Management Review, 10(3), pp.313-51.

Myers, P., Hulks, S. & Wiggins, L., 2012. Organizational Change: Perspectives on Theory and Practice. Oxford: Oxford University Press.

Shalley, C.E. & Gilson, L.L., 2004. What leaders need to know: A review of social and contextual factors that can foster or hinder creativity. The Leadership Quarterly, 15, pp.33-53.

Shalley, C.E., Gilson, L.L. & Blum, T.C., 2000. Matching creativity requirements and the work environment: Effects on satisfaction and intentions to leave. Academy of Management Journal, 43, pp.215-23.

Slater, S.F., Hult, G.T.M. & Olson, E.M., 2010. Factors influencing the relative importance of marketing strategy creativity and marketing strategy implementation effectiveness. Industrial Marketing Management, 39(4), p.551–559.

Tseng, S.-M., 2010. The correlation between organizational culture and knowledge conversion on corporate performance. Journal of Knowledge Management, 14(2), pp.269-84.

Wallas, G., 1926. The Art of Thought. New York: Harcourt-Brace.

 

Hadoop in Action

4.2 Hadoop building blocks

Running Hadoop means running a set of daemons, or resident programs, on the various servers in a network. Each daemon performs a specific role; some exist only on a single server, while others run across multiple servers. The daemons include the NameNode, DataNode, Secondary NameNode, JobTracker and TaskTracker (Chuck, 2014). These daemons and their roles within Hadoop form a significant part of this discussion, as illustrated below.

4.2.3 Secondary NameNode

The Secondary NameNode (SNN) is an assistant daemon that monitors the state of the cluster's HDFS. Like the NameNode, each cluster has one SNN, and it typically resides on its own machine; no DataNode or TaskTracker daemons run on the same server. The SNN differs from the NameNode in that it does not receive or record any real-time changes to HDFS. Instead, it communicates with the NameNode to take snapshots of the HDFS metadata at intervals defined by the cluster configuration. As the NameNode is a single point of failure for a Hadoop cluster, the SNN snapshots help minimize downtime and data loss. Nevertheless, a NameNode failure requires human intervention to reconfigure the cluster to use the Secondary NameNode as the primary NameNode.

4.2.4 JobTracker

The JobTracker daemon is the liaison between an application and Hadoop. When code is submitted to the cluster, the JobTracker determines the execution plan by establishing which files to process, assigns nodes to the different tasks, and monitors all tasks as they are running. Should a task fail, the JobTracker automatically relaunches the task, possibly on a different node. There is only one JobTracker daemon per Hadoop cluster; it typically runs on a server as a master node of the cluster.

4.2.5 TaskTracker

As with the storage daemons, the computing daemons follow a master/slave architecture: the JobTracker is the master overseeing the overall execution of a MapReduce job, whereas the TaskTrackers manage the execution of individual tasks on each slave node. This interaction is illustrated in Figure x.x. Each TaskTracker is responsible for executing the individual tasks that the JobTracker assigns. Although there is a single TaskTracker per slave node, each TaskTracker can spawn multiple JVMs to handle many map or reduce tasks in parallel. One responsibility of the TaskTracker is to constantly communicate with the JobTracker via heartbeats. If the JobTracker fails to receive a heartbeat from a TaskTracker within a specified amount of time, it assumes the TaskTracker has crashed and resubmits the corresponding tasks to other nodes in the cluster.

5. HDFS

5.x File Read and Write

An application adds data to HDFS by creating a new file and writing data to it. After the file is closed, the bytes written cannot be altered or removed, though new data can be added by reopening the file for appending. HDFS implements a single-writer, multiple-reader model. An HDFS client that opens a file for writing is granted a lease for the file; no other client can write to it. The writing client renews the lease periodically by sending heartbeats to the NameNode, and when the file is closed, the lease is revoked.
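As a minimal sketch of this write path, assuming a reachable cluster at the hypothetical address hdfs://namenode:8020 and append support enabled, a client might create, close and then reopen a file through the standard FileSystem API:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteAppend {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; substitute your cluster's URI.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        Path file = new Path("/tmp/example.log");

        // Creating the file grants this client the write lease,
        // which it holds (renewing via heartbeats) until close().
        FSDataOutputStream out = fs.create(file);
        out.writeBytes("first record\n");
        out.close(); // lease revoked; the written bytes are now immutable

        // New data can only be added by reopening the file for append.
        FSDataOutputStream appendOut = fs.append(file);
        appendOut.writeBytes("appended record\n");
        appendOut.close();

        fs.close();
    }
}
```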

The duration of a lease is bound by a soft limit and a hard limit. Until the soft limit expires, the writer has exclusive write access to the file. If the soft limit expires and the client fails to close the file or renew the lease, another client can preempt the lease. If the hard limit expires and the client has failed to renew the lease, HDFS assumes that the client has quit and automatically closes the file on the writer's behalf to recover the lease. Importantly, the writer's lease does not prevent other clients from reading the file; a file may have many concurrent readers.

An HDFS file consists of blocks. When a new block is needed, the NameNode allocates a block with a unique block ID and determines a list of DataNodes to host the block's replicas. The DataNodes form a pipeline, ordered so as to minimize the total network distance from the client to the last DataNode. Bytes are pushed through the pipeline as a sequence of packets. The bytes that an application writes are first buffered on the client side; after a packet buffer fills (typically to 64 KB), the data is pushed through the pipeline. The next packet can be pushed to the pipeline before the acknowledgement of the previous packet arrives, although the number of outstanding packets is limited by the client's outstanding-packet window size (Konstantin, Hairong, Sanjay and Robert, 2010).

When data is written to an HDFS file, however, HDFS does not guarantee that the data is visible to a new reader until the file is closed. If a user application needs this visibility guarantee, it can explicitly call the hflush operation. The current packet is then immediately pushed through the pipeline, and the hflush operation waits until all DataNodes in the pipeline acknowledge successful transmission of the packet. All data written before the hflush operation is then guaranteed to be visible to readers.
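To make freshly written data visible before the file is closed, the writer calls hflush on the output stream. A short sketch, under the same hypothetical cluster address as above:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HflushExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode:8020"), new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/stream.log"));

        out.writeBytes("event 1\n");
        // Push the current packet down the pipeline and wait for the
        // DataNodes to acknowledge it; everything written so far becomes
        // visible to concurrent readers even though the file is still open.
        out.hflush();

        out.writeBytes("event 2\n"); // not guaranteed visible until the next hflush/close
        out.close();
        fs.close();
    }
}
```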

 

 

When no errors occur, block construction goes through three stages: pipeline setup, data streaming and pipeline close. Figure 2 illustrates this for a pipeline of three DataNodes and a block of five packets. In the figure, bold lines represent data packets, dashed lines represent acknowledgement messages, and thin lines represent the control messages that set up and close the pipeline. Vertical lines represent activity at the client and at the three DataNodes, with time flowing from top to bottom. The interval from t1 to t2 is the data streaming stage, where t1 is the time the first data packet is sent and t2 is the time the acknowledgement of the last packet is received. The figure also shows an hflush operation for the second packet; the hflush indication travels with the packet data rather than as a separate operation. The interval from t2 to t3 is the pipeline close stage for the block.

In a cluster of thousands of nodes, failures of a node (most commonly storage faults) occur daily. A replica stored on a DataNode may become corrupted because of faults in memory, disk or the network. For this reason, HDFS generates and stores checksums for each data block of an HDFS file. Checksums are verified by the HDFS client while reading, to detect any corruption caused by the client, the DataNodes or the network. When a client creates an HDFS file, it computes the checksum sequence for each block and sends it to a DataNode along with the data. When HDFS reads a file, each block's data and checksums are shipped to the client, which computes the checksums of the received data and verifies that they match the checksums it received. If the verification fails, the client notifies the NameNode of the corrupt replica and then fetches a different replica of the block from another DataNode.

When a client opens a file to read, it fetches the list of blocks and the locations of each block's replicas from the NameNode. The locations are ordered by their distance from the reader. If a read attempt fails, the client tries the next replica in the sequence. A read may fail if the target DataNode is unavailable, if the node no longer hosts a replica of the block, or if the replica turns out to be corrupt when the checksums are tested. HDFS also permits a client to read a file that is open for writing. In that case, the length of the last block, still being written, is unknown to the NameNode, so the client asks one of the replicas for the latest length before reading its content. The design of HDFS I/O is particularly optimized for batch processing systems such as MapReduce, which require high throughput for sequential reads and writes. However, ongoing improvements to read and write response times support applications such as Scribe, which provides real-time data streaming to HDFS, and HBase, which provides random, real-time access to large tables.
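The corresponding read path can be sketched as follows. Opening the file triggers the block-location lookup described above, and FSDataInputStream supports seek for random access (the path and offset here are illustrative):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRead {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode:8020"), new Configuration());

        // open() fetches the block list and replica locations from the
        // NameNode; data then streams from the nearest DataNodes, with
        // checksums verified on the client side as the data arrives.
        FSDataInputStream in = fs.open(new Path("/tmp/example.log"));

        in.seek(6); // random access: skip the first 6 bytes
        byte[] buf = new byte[128];
        int n = in.read(buf);
        if (n > 0) {
            System.out.println(new String(buf, 0, n, "UTF-8"));
        }
        in.close();
        fs.close();
    }
}
```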

5.x Block Placement

For a large cluster, it is not practical to connect all the nodes in a flat topology. The common practice is to spread the nodes across multiple racks. The nodes on a rack share a switch, and rack switches are connected by one or more core switches. Communication between two nodes on different racks must pass through multiple switches. In most cases, the network bandwidth between nodes on the same rack is greater than the network bandwidth between nodes on different racks. Figure x.x describes a cluster with two racks, each of which contains three nodes.

Figure x.x cluster topology example

 

HDFS estimates the network bandwidth between two nodes by their distance. The distance from a node to its parent node is one, and the distance between two nodes is the sum of their distances to their closest common ancestor. A shorter distance between two nodes means a greater bandwidth available to transfer data (Konstantin, Hairong, Sanjay and Robert, 2010).
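The distance rule is easy to state in code. In the usual two-level topology (core switch, rack switch, node), the distance is 0 between a node and itself, 2 between different nodes on the same rack, and 4 between nodes on different racks. A small illustrative helper (the node and rack names are hypothetical):

```java
/**
 * Illustrative sketch of the HDFS distance measure for a two-level
 * topology: each hop from a node to its parent counts as 1, and the
 * distance between two nodes is the sum of their distances to the
 * closest common ancestor.
 */
public class TopologyDistance {

    static int distance(String nodeA, String rackA, String nodeB, String rackB) {
        if (nodeA.equals(nodeB)) {
            return 0;  // same node
        }
        if (rackA.equals(rackB)) {
            return 2;  // closest common ancestor is the rack switch: 1 + 1
        }
        return 4;      // closest common ancestor is the core switch: 2 + 2
    }

    public static void main(String[] args) {
        System.out.println(distance("n1", "r1", "n1", "r1")); // 0
        System.out.println(distance("n1", "r1", "n2", "r1")); // 2
        System.out.println(distance("n1", "r1", "n3", "r2")); // 4
    }
}
```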

HDFS allows an administrator to configure a script that returns a node's rack identification given the node's address. The NameNode is the central place that resolves the rack location of each DataNode: when a DataNode registers with the NameNode, the NameNode runs the configured script to determine which rack the node belongs to. If no script is configured, the NameNode assumes that all the nodes belong to a default single rack. The placement of replicas is critical to HDFS data reliability and to read and write performance, and a good replica placement policy improves data reliability, availability and utilization of network bandwidth. In addition, HDFS provides a configurable block placement policy interface so that users and researchers can experiment with and test placement policies suited to their applications.

The default HDFS block placement policy provides a trade-off between minimizing the write cost and maximizing data reliability, availability and aggregate read bandwidth. When a new block is created, HDFS places the first replica on the node where the writer is located, and the second and third replicas on two different nodes on a different rack. The remaining replicas are placed on random nodes, with the restrictions that no node holds more than one replica of a block and no rack holds more than two replicas of the same block, as long as the number of replicas is less than twice the number of racks. Placing the second and third replicas on a different rack distributes the block replicas of a single file across the cluster; if the first two replicas were placed on the same rack, then two-thirds of any file's block replicas would share one rack (Konstantin, Hairong, Sanjay and Robert, 2010).
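The decision rules of the default policy can be sketched in a few lines of Java. This is a deliberately simplified illustration of the rules described above, not the actual BlockPlacementPolicyDefault implementation, and the Node type is a stand-in:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Simplified sketch of the default HDFS replica placement rules. */
public class DefaultPlacementSketch {

    static class Node {
        final String name;
        final String rack;
        Node(String name, String rack) { this.name = name; this.rack = rack; }
    }

    static List<Node> chooseTargets(Node writer, List<Node> cluster, int replicas) {
        List<Node> targets = new ArrayList<>();
        targets.add(writer);                      // replica 1: the writer's own node

        // Replicas 2 and 3: two different nodes on a single remote rack.
        for (Node n : cluster) {
            if (targets.size() >= Math.min(replicas, 3)) break;
            boolean remoteRack = !n.rack.equals(writer.rack);
            boolean sameRemoteRack =
                    targets.size() == 1 || n.rack.equals(targets.get(1).rack);
            if (remoteRack && sameRemoteRack && !targets.contains(n)) {
                targets.add(n);
            }
        }

        // Remaining replicas: random nodes, with at most one replica per node.
        List<Node> candidates = new ArrayList<>(cluster);
        candidates.removeAll(targets);
        Random rnd = new Random();
        while (targets.size() < replicas && !candidates.isEmpty()) {
            targets.add(candidates.remove(rnd.nextInt(candidates.size())));
        }
        return targets;
    }

    public static void main(String[] args) {
        Node writer = new Node("n1", "r1");
        List<Node> cluster = List.of(
                new Node("n2", "r1"), new Node("n3", "r2"),
                new Node("n4", "r2"), new Node("n5", "r3"));
        for (Node t : chooseTargets(writer, cluster, 3)) {
            System.out.println(t.name + " on rack " + t.rack);
        }
    }
}
```

Note that this sketch omits the "no more than two replicas per rack" constraint for the tail replicas, which the real policy enforces.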

After the target nodes are selected, they are organized into a pipeline in order of their proximity to the first replica. For reading, the NameNode first checks whether the client's host is located in the cluster; if so, the block locations are returned to the client ordered by their proximity to the reader, and the block is read from the DataNodes in that order. It is common for MapReduce applications to run on cluster nodes, but as long as a host can access the NameNode and the DataNodes, it can execute the HDFS client. This policy reduces inter-rack and inter-node write traffic and generally improves write performance. Because the chance of a rack failure is far smaller than that of a node failure, the policy does not compromise data reliability and availability guarantees. In the usual case of three replicas, it also reduces the aggregate network bandwidth used when reading data, since a block is placed on only two unique racks rather than three. In summary, the default HDFS replica placement policy has two key properties: first, no DataNode contains more than one replica of any block; second, no rack contains more than two replicas of the same block, provided there are sufficient racks in the cluster.

Replication management

The NameNode ensures that each block always has the intended number of replicas. The NameNode detects that a block has become under-replicated or over-replicated when a block report from a DataNode arrives. When a block becomes over-replicated, the NameNode chooses a replica to remove: its first preference is not to reduce the number of racks that host replicas, and its second preference is to remove a replica from the DataNode with the least available disk space. The objective is to balance storage utilization among the DataNodes without reducing the block's availability.

When a block becomes under-replicated, it is put in a replication priority queue. A block with only a single replica has the highest priority, while a block whose number of replicas is greater than two-thirds of its replication factor has the lowest priority. A background thread scans the head of the replication queue and decides where to place new replicas. Block replication follows a policy similar to new block placement: if the number of existing replicas is one, HDFS places the next replica on a different rack; if the block has two existing replicas on a single rack, the third replica is placed on a different rack; otherwise, the third replica is placed on a different node on the same rack as an existing replica. The goal here is to reduce the cost of creating new replicas (Konstantin, Hairong, Sanjay and Robert, 2010).

The NameNode also ensures that not all replicas of a block are located on a single rack. If it detects that a block's replicas have all ended up on one rack, it treats the block as under-replicated, and the block is replicated to a different rack using the same block placement policy. When the NameNode receives the notification that the new replica has been created, the block becomes over-replicated, and the NameNode then decides to remove an old replica; because the over-replication policy prefers not to reduce the number of racks, the replica is removed from the rack that holds more than one.

6. MapReduce

Reading and writing

MapReduce distributed processing makes certain assumptions about the data being processed, while providing flexibility in dealing with a variety of data formats. Input data usually resides in large files, often of over one hundred GB. One of the fundamental principles of MapReduce's processing power is the splitting of the input data into chunks, which can be processed concurrently on multiple machines. These chunks, called input splits, should be small enough to allow granular parallelization; if all the input data were placed in one split, there would be no parallelization.

On the other hand, the splits should not be so small that the overhead of starting and stopping the processing of each split becomes a large fraction of the execution time (Chuck, 2014). This principle of dividing input data, whereby one massive file is split for parallel processing, explains some of the design decisions behind Hadoop FileSystems, and HDFS in particular. For example, the Hadoop FileSystem API provides the class FSDataInputStream for reading files, rather than using Java's java.io.DataInputStream. FSDataInputStream extends DataInputStream with random read access, which MapReduce requires because a machine may begin processing a split that sits in the middle of an input file; without random access, it would be extremely inefficient to read the file from the beginning up to the split. HDFS is also designed to store data that MapReduce can split and process in parallel: HDFS stores files in blocks spread over multiple machines. Roughly speaking, each file block is a split, and since different machines hold different blocks, parallelization is automatic when each split or block is processed by the machine holding it. Furthermore, since HDFS replicates blocks on multiple nodes for reliability, MapReduce can choose any of the nodes that have a copy of a split or block. By default, Hadoop considers each line of an input file to be a record, with the key/value pair being the byte offset and the content of the line, respectively.
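A record reader that processes a split in the middle of a file depends on exactly this random access. The following minimal sketch (file path, offsets and cluster address are illustrative) seeks straight to a split's starting offset and reads line records until the split's end:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SplitReader {
    public static void main(String[] args) throws Exception {
        long splitStart = 64L * 1024 * 1024;   // e.g. the second 64 MB block
        long splitEnd   = 128L * 1024 * 1024;

        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode:8020"), new Configuration());
        FSDataInputStream in = fs.open(new Path("/data/huge-input.txt"));

        // Random access: jump to the split without reading what precedes it.
        in.seek(splitStart);
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(in, "UTF-8"));
        if (splitStart != 0) {
            reader.readLine(); // discard the partial line; the previous split owns it
        }

        String line;
        // getPos() is approximate here because BufferedReader reads ahead,
        // but it suffices to stop near the split boundary in this sketch.
        while (in.getPos() <= splitEnd && (line = reader.readLine()) != null) {
            // process(line) would go here
        }
        reader.close();
        fs.close();
    }
}
```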

10 Hadoop security

Big data is a challenge not only for storing, processing and analysing data but also for managing and securing large data assets. Hadoop has no inbuilt security system; as enterprises adopted Hadoop, a security model based on Kerberos evolved. However, the distributed nature of the ecosystem, coupled with the wide range of applications built on top of Hadoop, complicates the process of securing an enterprise deployment.

A typical big data ecosystem involves multiple stakeholders interacting with the system. For example, analysts within an organization may interact with the ecosystem through business intelligence and analytics tools. At the same time, a business analyst in the finance department should not be able to access data belonging to the human resources department, so business intelligence tools require a robust, role-based level of access to the Hadoop ecosystem, depending on the protocols and the kind of data being communicated. One of the biggest challenges for an enterprise Big Data project is securing the integration of external data sources such as CRM systems, existing ERP systems, websites and social blogs. External connectivity must be established in such a way that the data extracted from these external sources is made available securely within the Hadoop ecosystem (Sudheesh, 2013).

10.1 Understanding the security challenges

Security was not considered during the initial development of Hadoop. The original objective of Hadoop was managing large amounts of public web data, so data security and privacy were not priorities, and it was assumed that Hadoop clusters would consist of cooperating, trusted machines used by trusted users in a secure environment. Initially there was no security model: Hadoop did not authenticate users or services and provided no data privacy. Since Hadoop was designed to execute code over a distributed cluster of machines, anyone could submit code to be executed. Although file permissions and auditing were implemented in the earlier distributions, such access control was easily circumvented because any user could impersonate another by switching the command line. Because impersonation was prevalent, these security mechanisms were not effective (Alexey, Kevin, Boris, 2013).

In the past, organizations concerned about the security of their Hadoop clusters addressed it by placing the clusters on private networks and restricting access to authorized users. However, because there were few security controls within Hadoop itself, accidents and security incidents were common in such environments. Even well-intentioned users can err, for example by deleting data that is being used by other users, and distributed deletes can destroy huge amounts of data within a short time. All users and programmers had the same level of access to the data in the cluster: any job could access any data in the cluster, and any user could potentially read any data set. This security lapse caused concern, especially regarding confidentiality. Moreover, because MapReduce had no concept of authentication or authorization, a mischievous user could lower the priority of other Hadoop jobs in order to complete his own activities faster.

As Hadoop gained popularity, data analysts and security experts began to express concern about the insider threat that malicious users posed to Hadoop clusters. A malicious developer could write code to impersonate other users' Hadoop services, for instance by writing and registering a new TaskTracker as a Hadoop service, or by impersonating the hdfs or mapred users and deleting everything in HDFS. Because the DataNodes enforced no access control, a malicious user could read arbitrary data blocks from DataNodes, undermining the integrity of the data being analysed. Intruders could also submit jobs to a JobTracker and have them executed arbitrarily.

As Hadoop matured, stakeholders realized that comprehensive security controls needed to be built into it. Security experts called for an authentication mechanism by which users, client programs and servers within a Hadoop cluster could confirm their identities. Authorization was also cited as a necessity, along with other particular security concerns, including auditing, privacy, integrity and confidentiality. However, most of these issues could not be addressed while there was no authentication, so authentication became the critical starting point in the redesign of Hadoop security.

This focus on authentication led a team at Yahoo! to introduce Kerberos as the basis of Hadoop security. The security design had several requirements. First, users can access only the HDFS files for which they have permission. Second, users can access and modify only their own MapReduce jobs. Third, users must be authenticated, to prevent unauthorized TaskTrackers, JobTrackers, DataNodes and NameNodes. Fourth, services must be authenticated, to prevent unauthorized services from joining the cluster. Finally, the Kerberos tickets and credentials should be transparent to the user and applications. Kerberos was integrated into Hadoop as the mechanism for implementing secure network authentication and controlling Hadoop processes. Since its introduction, Hadoop and the tools in the Hadoop ecosystem have evolved to provide security features that meet the needs of modern users (Alexey, Kevin, Boris, 2013).

 

10.2 Hadoop Kerberos security implementation

Enforcing security in a distributed system such as Hadoop is complex. The requirements for securing Hadoop systems were laid out in the Hadoop security design. In summary, the security requirements include user-level access controls, service-level access controls, user-service authentication, and the Delegation Token, Job Token and Block Access Token mechanisms.

User-level access controls

The user-level access controls involve a number of recommendations. First, users of Hadoop should be able to access only the data that belongs to them. Second, only authenticated users can submit jobs to the Hadoop cluster. Third, users can view, modify and kill only their own jobs. Fourth, only authenticated services should be able to register as DataNodes or TaskTrackers. Finally, data block access within a DataNode should be secured so that only authenticated users can access the data stored in the Hadoop cluster.

Service-level access controls

The service-level access controls include the following:

- Scalable authentication: A Hadoop cluster consists of a large number of nodes, so the authentication model must scale to support authentication across such a large network.

- Impersonation: A Hadoop service should be able to impersonate the user submitting a job, so that user isolation is maintained.

- Self-served: A Hadoop job may run longer than the lifetime of the credentials with which it was submitted, so the job must be able to renew the delegated user authentication on its own in order to complete.

- Secure IPC: Hadoop services should mutually authenticate one another and ensure that the communication between them is secure.

These conditions can be met by having Hadoop leverage the Kerberos authentication protocol together with internally generated tokens that are secured within the Hadoop cluster.

User and service authentication

User authentication to the NameNode and the JobTracker services is enabled through Hadoop's remote procedure calls, which use the Simple Authentication and Security Layer (SASL) framework. Kerberos is used as the authentication protocol within SASL to verify the identity of users, and all Hadoop services support Kerberos authentication. When a client submits a MapReduce job to the JobTracker, the job must access Hadoop resources on the user's behalf. This is achieved using three types of tokens: the Delegation Token, the Job Token and the Block Access Token.
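From the client side, Kerberos authentication is typically established once, through the UserGroupInformation API, before any HDFS or job-submission calls are made. A minimal sketch, in which the principal name and keytab path are hypothetical placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLogin {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell the Hadoop client libraries that the cluster uses Kerberos.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Hypothetical principal and keytab path; substitute your own.
        UserGroupInformation.loginUserFromKeytab(
                "analyst@EXAMPLE.COM", "/etc/security/keytabs/analyst.keytab");

        System.out.println("Logged in as: "
                + UserGroupInformation.getCurrentUser().getUserName());
        // Subsequent FileSystem and job-submission calls from this process
        // now authenticate over SASL using the acquired Kerberos credentials.
    }
}
```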

Delegation Token

Delegation Token authentication is a two-party protocol based on Java SASL DIGEST-MD5. The Delegation Token is used between the user and the NameNode to authenticate the user. Once the NameNode has authenticated a user via Kerberos, it provides the user with a Delegation Token; a user holding a Delegation Token does not have to undergo Kerberos authentication again for subsequent HDFS access. The user can also designate the JobTracker or ResourceManager as the renewer of the Delegation Token when requesting it. Once authentication is complete, the secured Delegation Token is sent to the JobTracker or ResourceManager, which assumes the role of the user, using the Delegation Token to access HDFS resources on the user's behalf and renewing the token when it encounters a long-running job.
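Once authenticated via Kerberos, a client can request a Delegation Token from the NameNode, naming the job-tracking service as the renewer. A hedged sketch of this call, noting that the renewer principal is hypothetical and the exact API location varies slightly across Hadoop versions:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class DelegationTokenExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode:8020"), new Configuration());

        // Ask the NameNode for a Delegation Token, designating the
        // JobTracker/ResourceManager principal (hypothetical here) as the
        // party allowed to renew it while the job runs.
        Token<?> token = fs.getDelegationToken("mapred/jobtracker@EXAMPLE.COM");

        // On a secured cluster the token travels with the submitted job, so
        // its tasks can access HDFS without re-running Kerberos authentication.
        if (token != null) {
            System.out.println("Token kind: " + token.getKind()
                    + ", service: " + token.getService());
        }
        fs.close();
    }
}
```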

Job Token

Jobs run on the task nodes, and user access there must also be secured. When a user submits a MapReduce job to the JobTracker, a secret key is created and shared with the TaskTrackers that will run the job; this secret key constitutes the Job Token. The Job Token is stored on the local disk of each TaskTracker, in a directory accessible only to the user who submitted the job. The TaskTracker starts the child JVM task using the identity of the user who submitted the job, so the child JVM can read the Job Token from that directory and use it to communicate securely with the TaskTracker. The Job Token is thus used to ensure that authenticated users' jobs in Hadoop access only the folders and jobs authorized by the local file system of the task nodes. When a reduce task starts, it communicates with the TaskTrackers that ran the map tasks to collect the mapper output files, and the Job Token is also used to secure this communication.

Block Access Token

When a client requests data from HDFS, it must fetch the data blocks from the DataNodes after fetching the identities of the blocks from the NameNode. A secure mechanism is therefore needed to pass the user's privileges to the DataNodes. The main function of the Block Access Token is to ensure that only authorized users can access the data blocks stored in the DataNodes. When a client wants to access data stored in HDFS, it asks the NameNode for the block IDs and the DataNode locations, after which it can contact the DataNodes to fetch the blocks of data. The NameNode's authorization decision is enforced at the DataNode by having Hadoop implement the Block Access Token: Block Access Tokens are provided by the NameNode to Hadoop clients so that they can carry the data-access authorization information to the DataNode.

The Block Access Token implements symmetric key encryption, in which the NameNode and all the DataNodes share a common secret key. Each DataNode receives this secret key when it registers with the NameNode, and the key is regenerated periodically. Block Access Tokens are lightweight and contain the access modes, block ID, owner ID, key ID and expiration date. An access mode defines the permission granted to the user for the requested block ID. Block Access Tokens generated by the NameNode are not renewable and must be fetched anew when they expire. In this way, Block Access Tokens ensure the security of the data blocks held in the DataNodes, so that only authorized users may access a data block (Sudheesh, 2013). The following figure shows the various interactions in a secured Hadoop cluster:

Interactions in a secured Hadoop cluster

The overall Hadoop Kerberos operation involves several key steps. First, all Hadoop services authenticate themselves with the KDC; during this process, the DataNodes register with the NameNode, the TaskTrackers register with the JobTracker, and the NodeManagers register with the ResourceManager. Second, the client authenticates itself with the KDC and requests service tickets for the NameNode and the JobTracker or ResourceManager. Third, to access an HDFS file, the client connects to the NameNode server; the NameNode authenticates the client and provides it with authorization details along with the Block Access Tokens, which the DataNodes require to validate the client's authorization before providing access to the corresponding blocks. Finally, to submit MapReduce jobs to the Hadoop cluster, the client requests a Delegation Token from the JobTracker; the Delegation Token is then used for submitting the MapReduce jobs to the cluster (Sudheesh, 2013).

 

References

Alexey, Y., Kevin, T. & Boris, L., 2013. Professional Hadoop Solutions. Indianapolis: Wrox.

Chuck, L., 2014. Hadoop in Action. Shelter Island: Manning Publications.

Konstantin, S., Hairong, K., Sanjay, R. & Robert, C., 2010. The Hadoop Distributed File System. In Proceedings of the 26th IEEE Symposium on Mass Storage Systems and Technologies (MSST '10), Incline Village, Nevada.

Sudheesh, N., 2013. Securing Hadoop. Birmingham: Packt Publishing.

 

 

Lithium Rechargeable Batteries

This term paper discusses how lithium batteries work, covering not only their function but also the components and materials used to make them work as required.

Introduction

Before discussing what a lithium battery is and how it works, we first need to know what a battery is. Briefly, a battery is an electrical device that uses electrochemical cells to transform stored chemical energy into electrical energy. Each cell contains a positive and a negative terminal, which allow the movement of ions and thereby the flow of current in the cell. Cells can be categorized into two types: primary and secondary. These batteries have different features, but their function is the same. A primary cell is used once and disposed of when exhausted; the best examples are the alkaline batteries used in watches and torches. Secondary batteries, on the other hand, do not have to be disposed of after use, since they can be recharged many times when their power is exhausted. Good examples of this kind of cell are the lead-acid battery used in vehicles and most of the lithium batteries found in portable electronic devices.

When many people hear the word battery, they imagine only a small object used to power small devices, not knowing that batteries come in very different sizes depending on what they are meant to power. The mini cells found in wristwatches, familiar to many people, are among the smallest types, while at the other extreme there are rechargeable battery banks used in large institutions for power backup in case of a shortage, and these can be as large as a room. Compared with other sources of power, batteries tend to have a lower specific energy, since they must perform all conversions internally and deliver their energy in the form of electricity.

The term "lithium battery" refers to disposable, or primary, batteries that have lithium metal as the anode. These cells produce voltages of roughly 1.5 V to 3.7 V, depending on the design and on the compounds used in manufacture. Lithium batteries differ from other kinds of cells in their ability to retain their charge over long periods of time. Compared with rechargeable lithium-ion batteries, the key distinction is that in lithium-ion cells the ions move between cathode and anode by reversible insertion into a host compound, whereas lithium batteries use metallic lithium itself.

Theory

Lithium batteries were invented by M. Stanley Whittingham in the 1970s; his design used titanium disulfide paired with lithium as the electrode materials. Batteries that used metallic lithium proved dangerous, however, because metallic lithium is highly reactive and demands strict safety measures, so researchers proposed using intercalated lithium compounds instead of lithium metal. Three major functional components make up a Li-ion battery: the cathode, the anode, and the electrolyte. The anode of a lithium-ion battery is made of carbon, the cathode is a metal oxide, and the electrolyte is a lithium salt dissolved in an organic solvent. Electrode materials vary: the most commonly used negative electrode is graphite, while positive electrodes come in three main families, namely layered oxides, spinels, and polyanion compounds. Electrolytes are mixtures of organic carbonates such as ethylene carbonate; because they must be free of water, they rely on anion salts such as lithium hexafluorophosphate, among others, the choice depending on the materials used elsewhere in the cell. Overall, lithium batteries are more expensive, but on the brighter side they operate over wide temperature ranges and have high energy density. They should always include a protective circuit that limits the maximum voltage, preventing the battery from exploding or discharging excessive energy.

Portable electronic devices such as laptops and iPads are fitted with lithium-ion batteries that include temperature sensors, which regulate the amount of energy the device takes in and stop the battery from overheating while charging.

Lithium-ion batteries, also referred to as Li-ion batteries, belong to the rechargeable family: lithium ions move from the negative to the positive electrode when the battery discharges stored energy, and in the opposite direction when it charges. Unlike lithium batteries, which use metallic lithium, Li-ion cells use an intercalated form of lithium, and the electrolyte plays a central role in carrying the ions between the electrodes. Batteries of this kind are found mostly in portable electronics and are the most common rechargeable type for such devices: they lose little energy when not in use, they store a great deal of energy relative to their small size, and they suffer essentially no short-term self-discharge, so the charge stored when a device is switched off is still there when it is switched back on. Lithium-ion batteries are steadily taking over the market and expanding their uses; they now power military electronics and vehicles as well as some aerospace equipment. As technology advances, many companies are also looking to replace lead-acid batteries with lithium-ion ones in vehicles and other machines; the goal is a lithium-ion battery that is smaller yet able to sustain the same amount of energy as the lead-acid battery it replaces.

For a lithium-ion battery to charge or discharge, several stages must occur. To charge, an external electrical power source, the charger, must supply a voltage higher than the voltage produced by the battery, forcing current to flow backwards through the cell in the direction imposed by the charger. As the battery charges, the ions are driven from the cathode to the anode, where they embed themselves in the porous electrode material. During discharge, by contrast, the lithium ions carry the current from anode to cathode through the non-aqueous electrolyte.
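The text describes this ion flow only in words. A common textbook way to write it, assuming the widespread graphite / lithium-cobalt-oxide chemistry (the specific electrode pair is an assumption here, not something stated above), is the pair of half-reactions below; both run left to right on charge and reverse on discharge:

    \mathrm{LiCoO_2} \;\rightleftharpoons\; \mathrm{Li_{1-x}CoO_2} + x\,\mathrm{Li^+} + x\,e^- \qquad \text{(positive electrode)}

    \mathrm{C_6} + x\,\mathrm{Li^+} + x\,e^- \;\rightleftharpoons\; \mathrm{Li_xC_6} \qquad \text{(negative electrode)}

On charge, lithium ions leave the metal oxide and intercalate into the graphite; on discharge both reactions reverse, matching the anode-to-cathode flow described above.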

Charging a single Li-ion cell and charging a complete Li-ion battery pack involve slightly different processes. A single cell is charged in two stages: constant current, then constant voltage. A complete pack is charged in three: constant current, balancing (which can be skipped if the pack is already balanced), and constant voltage. During the constant-current phase, the charger supplies a fixed current to the battery while the voltage rises steadily, until the voltage limit allowed for a single cell is reached. In the balancing phase, the charger reduces the charging power while a balancing circuit brings the state of all the cells to the same level, until the whole pack is balanced. In the constant-voltage stage, the charger holds a voltage equal to the maximum cell voltage multiplied by the number of cells, and the current gradually falls toward zero until it drops below a set minimum, at which point charging stops. When charging a lithium-ion battery it is always advisable to observe its voltage limits, since using a charger with a higher voltage than the battery is rated for can cause an explosion or damage both the battery and the charger.
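As a minimal sketch of that logic, the following Python simulation walks a toy battery model through the constant-current and constant-voltage phases. All of the numbers (the 4.2 V per-cell limit, 3 cells in series, the internal resistance, the current cutoff) are illustrative assumptions for the sketch, not values taken from the text:

    # CC-CV charging sketch: constant current until the pack voltage limit is
    # reached, then constant voltage while the current tapers to a cutoff.
    CELL_V_MAX = 4.2                    # assumed per-cell voltage limit (V)
    N_CELLS = 3                         # assumed number of cells in series
    PACK_V_MAX = CELL_V_MAX * N_CELLS   # CV setpoint = cell max x cell count
    I_CC = 2.0                          # constant-current setpoint (A)
    I_CUTOFF = 0.1                      # stop charging below this current (A)
    R_INT = 0.05                        # crude lumped internal resistance (ohm)
    K_OCV = 2.0                         # toy open-circuit-voltage slope (V/Ah)

    def simulate_charge(ocv=9.0, dt_h=0.01):
        """Toy pack model in which open-circuit voltage rises with charge."""
        delivered_ah = 0.0
        phase = "CC"
        while True:
            if phase == "CC":
                current = I_CC
                # terminal voltage = OCV + I*R; at the pack limit, switch to CV
                if ocv + current * R_INT >= PACK_V_MAX:
                    phase = "CV"
                    continue
            else:
                # CV phase: hold PACK_V_MAX, so current tapers as OCV rises
                current = (PACK_V_MAX - ocv) / R_INT
                if current <= I_CUTOFF:
                    break               # charge complete
            delivered_ah += current * dt_h
            ocv += K_OCV * current * dt_h
        return delivered_ah, ocv

    ah, v = simulate_charge()
    print(f"charge delivered: {ah:.2f} Ah, final pack OCV: {v:.2f} V")

A real pack charger would also run the balancing phase between CC and CV; that is omitted here to keep the sketch short.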

Charging of lithium-ion batteries can also be affected by temperature, among other factors. The batteries themselves carry few restrictions on when and where they can be used, which is an advantage, since they work across a wide range of temperatures. The problem arises during charging: charging in places with high temperatures can shorten the battery's life span or ruin it completely. It is therefore advisable to charge the battery in a cool room, at roughly 4 to 38 °C. Charging below 4 °C is still possible, but the charging current should first be reduced. Most lithium-ion batteries made for the consumer market, however, should never be charged at temperatures below 0 °C. Charging in such conditions shows no immediate side effects, and the battery appears to charge normally, but over time metallic lithium can plate onto the electrode at freezing temperatures, and the resulting damage is permanent and cannot be fixed by recharging.
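These rules reduce to a simple policy. The sketch below encodes them in Python using the temperature bands stated above; the derating factor between 0 and 4 °C is an assumption for illustration, since the text only says to make the current "low", and refusing to charge above 38 °C is likewise an interpretation, since the text only warns of damage:

    def charge_current_limit(temp_c, i_nominal, cold_derate=0.25):
        """Return the allowed charging current at a given temperature.

        Temperature bands as stated in the text:
          below 0 C  -> do not charge (risk of permanent lithium plating)
          0 to 4 C   -> charge at a reduced current (derate factor assumed)
          4 to 38 C  -> recommended range, charge at the nominal current
          above 38 C -> treated here as too hot, so charging is refused
        """
        if temp_c < 0:
            return 0.0
        if temp_c < 4:
            return i_nominal * cold_derate
        if temp_c <= 38:
            return i_nominal
        return 0.0

    print(charge_current_limit(25, 2.0))   # 2.0 -> normal charging
    print(charge_current_limit(2, 2.0))    # 0.5 -> cold-derated
    print(charge_current_limit(-5, 2.0))   # 0.0 -> charging refused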

Materials and properties

Lithium, like the other alkali metals, has a single valence electron, which it gives up readily to form a positively charged ion. Because of this, lithium is a very good conductor of electricity and heat, yet even so it remains the least reactive of the alkali metals. The reason for its comparatively low reactivity is the closeness of its valence electron to the nucleus. Although lithium is classified as a metal, it is soft enough to be cut with a knife; a freshly cut surface has a shiny silvery appearance that turns gray on exposure to oxygen. Lithium has the highest melting and boiling points of the alkali metals, though its melting point is still low compared with most other metals.

Lithium has a very low density, comparable to that of oak wood. It is essentially the least dense of all solid elements at room temperature; the next-lightest metal, potassium, is roughly 60% denser. Among liquids, only hydrogen and helium are less dense. Lithium is in fact so light that it floats on water, making it one of the few metals with this ability.
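Using standard handbook densities (lithium about 0.534 g/cm³, potassium about 0.862 g/cm³), the comparison can be made exact:

    \frac{\rho_{\mathrm{K}}}{\rho_{\mathrm{Li}}} = \frac{0.862\ \mathrm{g/cm^3}}{0.534\ \mathrm{g/cm^3}} \approx 1.61

so potassium is roughly 60% denser than lithium, and lithium, at about half the density of water (1.0 g/cm³), floats.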

 

Processing

A battery's ability to discharge depends on the diffusion of lithium ions at the negative and positive electrodes, with the current carried out through the current collectors. The process relies on diffusion through the electrolyte into the cathode. Under high discharge and charge currents, diffusion becomes the biggest obstacle. The reversible and irreversible insertion processes also produce a volume change in the active electrode materials. Companies have faced major problems in processing and manufacturing battery materials; efforts to improve performance and to accommodate this volume change have led to composite materials built from nanoscale particles.
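The text gives no equations, but the standard starting point for describing this transport is Fick's first law, which relates the lithium-ion flux J to the concentration gradient:

    J = -D\,\frac{\partial c}{\partial x}

where D is the diffusion coefficient and c the local lithium-ion concentration. At high currents the required flux is large, so steep concentration gradients build up faster than diffusion can relax them, which is why diffusion becomes the limiting obstacle noted above; shortening the diffusion path, as nanoscale particles do, attacks the same limit from the other side.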

Nanoparticles can accommodate the volume change with the smallest chance of cracks appearing, and nanostructured composites yield short diffusion path lengths through the slow solid-state diffusion stage. In manufacturing, the main focus is always on packing density, to maximize active-material content and the electronic conduction that carries charge to the collector. Cylindrical batteries are manufactured and packed as follows. First, the electrode slurries are prepared as a paste of active-material powders, additives, binders, and solvents; the paste is then machine-coated onto collector foils, aluminum for the cathode and copper for the anode. The manufacturers then check for homogeneous thickness and particle size before cutting the coated foils to a uniform width. The cut electrodes are stacked together, with the anodes kept apart from the cathodes, and inserted into cylindrical cases. The case is then filled with electrolyte, whose primary task is to wet the separator and the electrodes. Once this is done, the insulators and safety devices are connected. Finally, the batteries are given their first charge so they can be tested; this first charge has a large influence on the cell's performance, life span, and cycling behavior.

Applications

The use of batteries is so much a part of daily life that at this point mankind could hardly live without them. Most of the things we do every day involve batteries in one way or another, and even relaxing at home would be far less comfortable without them: the electronic devices we use for entertainment, such as laptops, cameras, and watches, all run on batteries. Lithium cells tend to end up in long-life roles; devices equipped with lithium-iodine cells can have a life span of around 15 years or more if properly used. For short-lived objects like toys, however, a lithium battery might well outlast the toy itself, which is why such objects use inexpensive batteries that are not designed to last long.

Instead of spending money repeatedly on alkaline batteries, it is easier to use lithium batteries in objects such as clocks and cameras; they are more costly, but if price is not an issue they are the best way to go. The main advantage of lithium cells is that they last a long time, cutting the cost of buying batteries again and again. Every advantage has a corresponding disadvantage, however, and for lithium batteries it is this: users must pay attention to the higher voltage of lithium cells before using them as drop-in replacements in objects designed to operate on ordinary zinc batteries.

 

Conclusion

Compared with all other battery types, lithium-ion batteries offer the best options for energy storage and provide the high power and energy needed for applications such as transportation, owing to their strong electrochemical potential, high energy density, and high theoretical capacity.

 

References

[1] Imanishi, Nobuyuki, Alan C. Luntz, and Peter G. Bruce. The Lithium Air Battery: Fundamentals. New York: Springer Science+Business Media, 2014. Print.

[2] Ashton Acton, Q. Anhydrides—Advances in Research and Application: 2013 Edition. N.p.: ScholarlyEditions, 2013. Print.

[3] Ozawa, Kazunori, ed. Lithium Ion Rechargeable Batteries: Materials, Technology, and New Applications. N.p.: John Wiley & Sons, 2012. Print.

[4] Smart, M. C., B. V. Ratnakumar, and K. M. Abraham. Rechargeable Lithium and Lithium Ion Batteries, Issue 29. Ed. M. C. Smart. N.p.: Electrochemical Society, 2008. Print.

[5] Scrosati, Bruno, K. M. Abraham, Walter A. Van Schalkwijk, and Jusef Hassoun, eds. Lithium Batteries: Advanced Technologies and Applications. N.p.: John Wiley & Sons, 2013. Print.

[6] Doyle, M., E. Takeuchi, and K. M. Abraham. Rechargeable Lithium Batteries: Proceedings of the International Symposium. Pennington, NJ: Electrochemical Society, 2001. Print.

 

 

The Advantages of Having Your Articles Republished and How Exactly to Manage It

Article marketing is a very important component of many small affiliates’ marketing campaigns. I have written a great many articles discussing the advantages of writing quality, keyword-rich, preselling articles and distributing them online. In this article I will go further and develop the idea of a correlation between article quality and republishing rate, and show how this kind of distribution can be hugely beneficial for your internet business.

Part 1: The advantages of being republished

See your articles go viral

Republishing is an option available on most of the main article directories, including EzineArticles, the site where this article will first be published. Bloggers and website owners are free to take and use your article as long as the content and links are not edited in any way. They gain new and interesting content for their readers, and in exchange you receive a genuine amplification of the benefits gained from each article submission. Your articles, and hence your links, are spread around the internet, and this can take on a viral quality, especially for high-quality articles; the rise of social bookmarking has made the phenomenon even more common. In the modern era of affiliate marketing, your adverts and articles are judged ever more critically by readers, so if you have a talent for writing and a developed knowledge of your niche, the advantage has shifted towards you; this is why you will hear many professionals say that ‘content is king’. The best and most interesting articles really can be read by thousands of people overnight. I have had this happen a couple of times, when my site featured highly on StumbleUpon or Reddit, and, even better, many of the resulting influx of visitors also made purchases.

Backlink benefits for SEO

Apart from the instant growth of your article’s readership, there are also long-term SEO benefits to be had from wide-scale republishing. The more related sites use your article, the more backlinks you get, so in this case more is clearly better. Moreover, the more widely your article is republished, the more sites will discover and republish it themselves. Backlinks are such an important factor in search engine rankings that extracting as many backlinks as possible per article can produce big movements. This is the essence of my own marketing strategy and is certainly an approach which has brought me success. Re-publishers must also link back to the original articles, so these too can move to the top of the rankings on Google and other search engines; if you can see your own pages and your original articles at the top of the search engines, you will be getting a huge amount of highly targeted, organic traffic, as I am in many niches.

Part 2: How to get your articles republished

Considering these backlink and targeted-visitor benefits, it makes a great deal of sense to optimise your articles for republishing. Here is my advice on exactly how to do this; I have written more in-depth articles on this subject, but I feel this is a decent beginner’s guide.

Firstly we must consider why exactly website and blog owners choose certain articles to republish. Admittedly the rates of republishing will depend upon the size and specificity of the niche; however, there are a few general trends which you should try to capitalise upon.

Focus on quality and relevance

The first piece of advice is to focus, as far as possible, on quality. This means investigating how relevant your topic is to your readers, and doing so before you start to write. The more tailored an article is to people’s needs, the more likely they are to visit your website and/or republish it. I have, for instance, written long and well-structured articles only for them to prove useful to a tiny niche with little commercial value. You should avoid this trap of over-specificity if at all possible, especially when you start to run out of things to write. Spending a few minutes researching the Google keywords people are searching for, as well as the more popular threads on forums, can help you identify relevant topics. For instance, I was inspired to write this article after reading a discussion on quality versus quantity, and used my answer as the basis for this piece.

Spend more time to ensure quality

While I’m on the matter of article quantity, I will also say that a quantity-first approach to article marketing is unlikely to bring the same republishing benefits. Over the life of an article this works to your disadvantage, as poor-quality articles will not continue to spread and build new backlinks; even if a quality article only gains you a couple of new backlinks each month, that is still well worth the extra hour or two you may have to spend initially creating it. Letting your articles do the work for you is a far more rewarding, not to mention time-saving, experience than having to keep churning out articles purely for backlink purposes. Quality means being sure to include everything you feel is relevant, and planning thoroughly is a good way to achieve this. Also spell-check and make sure the grammar contains absolutely no mistakes; people don’t want poorly written rubbish on their websites.

Go into greater depth and be unique

The next piece of advice is to make your articles in-depth and analytical. Really answer the reader’s questions and discuss any original insights you might have. It is simple common sense that a reader is far more likely to republish an article that is not only highly relevant but also unique. You should be trying to compel anybody who reads the article to republish it, so its usefulness must be top notch, which means being comprehensive in your coverage of the topics involved. I have experimented with this a lot and found that articles of between 1,000 and 2,000 words were the most effective for republishing purposes. This length is enough to maintain interest and to convey an impression of knowledge and expertise: anything shorter and you probably haven’t included enough information; anything longer and your readers may switch off before reaching the end. There may be niches or specific articles which break this rule, but in general a long and in-depth article is the best way forward.

Tempt people to read

Finally, I will say that you need, quite obviously, to get people to read your article in the first place. This includes all the standard things, such as an intriguing title, keyword-rich content, and building backlinks to the article itself. I have written a lot more about this, so please check my other articles for further detail.

If you can write to a high standard and follow my advice, you should be able to start driving both your backlinks and your targeted visitors through the roof. I hope you have found this particular article interesting; I don’t think it’s a bad example of my successful work, and in the affiliate niche republish rates average nearly 10% of article views, so you can imagine the success I’m enjoying as a result. This kind of success is achievable for anyone, as long as you stay motivated and keep writing. And remember that spending just a little longer on each article can make a massive difference to your republishing results in the long run.