Monday, September 30, 2019

The Autobiography of Mahatma Gandhi

The autobiography of Mohandas Karamchand Gandhi, subtitled The Story of My Experiments With Truth, focuses on Gandhi's struggles for non-violence and civil disobedience through the practice of Satyagraha, literally meaning "holding firmly to truth." In each chapter he describes instances in his life in which he struggled with Truth, which he considered the ultimate source of energy. The question many might ask is: how could one so thin, one who walked with a stick throughout his struggles, draw on such energy? It was through these experiments and trials that Gandhi developed his ideas on dietetics, non-violence, hydropathy, naturopathy and so on. After finishing his studies in England, he went to South Africa, where he changed from a typical lawyer into a remarkable one. It is all the more surprising that he kept the ideologies he formed from studying law and eastern and western philosophy by his side and followed them to the extreme. He was firm in his convictions in any situation and followed them, perhaps, to some, inflexibly so.

One reason I am so taken with Gandhi is his simplicity: wearing a single dhoti (an Indian garment) and living solely on vegetables. Even when he, or his son, was on his deathbed, he insisted that eating anything other than vegetables was wrong. He believed that by keeping to such necessities, in line with his teachings, one can live freely: without food or drink, without anger or desire, provided one follows a simple code of behavior. This book thus teaches, in practical terms, how to live without material needs.

Some passages worth quoting:

– "I cannot attain freedom by a mechanical refusal to act, but only by intelligent action in a detached manner. This struggle resolves itself into an incessant crucifixion of the flesh so that the spirit may become entirely free."
– "That freedom is attainable only through slow and painful stages."
– "A reformer cannot afford to have close intimacy with him whom he seeks to reform."
– "He who would be friends with God must remain alone, or make the whole world his friend."
– On actively forgiving sin: "Man, as soon as he gets back his consciousness of right, is thankful to the Divine mercy for the escape."
– "When such Ahimsa [non-violence] becomes all-embracing, it transforms everything it touches. There is no limit to its power."
– On monastic changes to his life: "Let not the reader think that this living made my life by any means a dreary affair. On the contrary the change harmonized my inward and outward life. It was also more in keeping with the means of my family. My life was certainly more truthful and my soul knew no bounds of joy."

Sunday, September 29, 2019

Concepts of Family Nursing Theory

Nurses hold a unique position among health care professionals in terms of prolonged proximity to patients during a stay in hospital, or while a person with a long-term health problem is being cared for at home. In the contemporary context it is necessary to address the needs of the families whose lives may be irrevocably changed by the illness of one member. As Friedman (1992:29) put it: "The psychosocial strains on a family with a member suffering a chronic or life-threatening condition can rival the physical strains on the patient." However, it is not only in relation to chronic illness and disability that families may stand in need of help. The family developmental life cycle involves natural transitions which may create considerable stress. One example might be a woman trying to deal with an adolescent son who is engaging in risk-taking with drugs and alcohol, to protect her younger son from his brother's influence, and to persuade her busy husband to give more attention to his family, all while providing some support for her mother, who is caring for an increasingly frail husband. There is potential for conflict in all of these relationships as family members attempt to balance their own needs with those of other members of the family, and of the family as a unit. Such family tensions are likely to influence the health and well-being of each family member, and their ability to deal with unanticipated events such as accidents or unemployment. Wherever families are struggling to maintain or restore equilibrium, or to find ways of coping effectively with crisis or long-term stress, nurses may find themselves in a supportive role.

Frude (1990) notes that some authors in the literature on families focus upon individuals and regard other members as the social context of the person, while other authors look at the family unit as a whole, with individual members as parts of the whole. This distinction is pertinent to discussions of family nursing. Currently nurses and their colleagues see it as both legitimate and important to take into account the family context of their patients or clients, and much more discussion and collaboration takes place with relatives than in the past. Nurses in some specialties, for instance community nursing, pediatrics or psychiatric nursing, might argue that because of the nature of their work they have always been concerned with the family of the particular client or patient.

From the contemporary perspective, it is useful to be aware of how family composition is changing in order to keep in mind the wider context of society as a whole. It is possible to be under the impression that the family today is in terminal decline, if all that one reads in the popular press is to be believed. A closer look behind the headlines reveals that what is understood to be under threat is the traditional two-biological-parent household with dependent children, the nuclear family. It is increasingly apparent that a growing minority of children will experience life in a family headed by a lone parent, usually the mother, before they reach adulthood. A popular misconception is that the majority of these mothers are single women. Their numbers are growing faster than those of other groups, the figures for which seem to have stabilized at the end of the 1990s, but divorced, separated and widowed mothers still constitute the majority. In addition, the divorce rate among remarried couples remains higher than for the general population.
There are many factors involved in this, but the additional stresses of a reconstituted family may make it more vulnerable to breakdown; for instance, the parent-child bond predating the marital bond can lead to step-parents competing with their children for primacy with their spouse. Dimmock (1992) notes that too often the blended family is cast in the mould, or ideal, of the nuclear family; indeed, many of those involved are keen to view it in that light. Remarried families can often be struggling with unresolved emotional issues at the same time as coping with family transitions. Dimmock (1992) also indicates that society offers the choice of two conceptual models: that of the nuclear family, or the wicked step-parent (mostly stepmothers) of fairy tales. The family nursing model allows accommodation of a family with less rigid boundaries. A nurse, perhaps in the role of health visitor, with an understanding of family systems and family nursing could provide valuable support and help these families work through some of the issues involved.

There is another group of families which is becoming more prominent, particularly in the United States. Lesbian and gay parenting is currently a topic of intense interest as our society struggles to decide whether it will move forward on human rights issues or attempt to retrench and move back into a mythical past of "family values." This is increasingly an area of interest and debate in the US, especially as reproductive technologies have advanced to the point where a lesbian woman can contemplate pregnancy without a male partner. Gay men wishing to raise a family are also becoming a focus for media interest and debate in this country. The impact of AIDS and HIV infection has also highlighted issues concerning next of kin for gay men, particularly within the health service and in legal terms. This demonstrates the appropriateness of accepting the notion that, from a nursing perspective, the family is whoever the individual identifies as family, even though this may not conform to biological or legal ways of thinking.

From a personal viewpoint, the strongest argument for the appropriateness of family nursing in the United States now is the massive shift of care from hospitals and institutions to the community. Patients in hospital are more acutely ill, with resultant stress for families who need support. In the community, families are in the first line of caring for individuals with intractable, often severe, health problems. At the same time, the purpose of nursing is to provide care for those with continuing needs in partnership with people and with other organizations. I therefore fully agree with the purpose of family nursing described by Hanson (1987:8), which is to promote, maintain, and restore family health. Moreover, family nursing is concerned with the interactions between the family and society, and among the family and individual family members.

References

Dimmock, B. (1992) A child of our own, Health Visitor, 65, 10: 368-370.
Friedman, M.M. (1992) Family nursing: Theory and practice, 3rd edn, Connecticut: Appleton & Lange.
Frude, N. (1990) Understanding family problems: A psychological approach, Chichester: John Wiley & Sons.
Hanson, S.M.H. (1987) Family nursing and chronic illness, in Wright, L. and Leahey, M. (eds) Families and chronic illness, Pennsylvania: Springhouse.

Saturday, September 28, 2019

Systems Engineering Management Case Study

The G-Soft was the riskiest function that the program could embrace. Precisely, this is because it had a poor managerial structure and poorly stipulated duties, making it ineffective. Additionally, the workers on the program were resentful because they had been withdrawn from their previous workstation without prior notice. Consequently, their morale was low, resulting in poor team cohesion.

Techno State University had the greatest impact on the program as a whole (Case Study, 2015). Specifically, this is because the university professor had sufficient experience in developing devices for use in similar research. In addition, the Direct Broadcast mode used with the SASS program met the specifications required for transferring data to support field measurements. In spite of that, the major shortcoming of the Techno University subcontractor was the high cost of the SASS function needed in the primary program.

The ethical concern facing Jim is the rationale for proceeding with the project amid the threats facing the program and the lack of an adequate solution to them. Specifically, this concerns the absence of both human and financial resources, as well as of the information needed to make sound decisions. Bob should approach the head of the Spacecraft IPT and discuss the importance of collaboration in executing the program's projects. In doing so, he should inform him that the information required by his team is critical for the completion of other parts of the program (Case Study, 2015). In addition, he should emphasize that a delay in one of the projects can lead to failure of the whole program. Furthermore, he should ask others, such as the head of the program, to intervene and find a permanent solution to the problem.

Friday, September 27, 2019

Organizational Behavior as one of the Essential Elements of Management Essay

With both its internal and external aspects, motivation is instrumental to employees' choices, level of input, and persistence in applying effort to a particular activity in pursuit of success. The recommendations on developing motivational theories are based on the assumption that existing theories are less than fully effective in motivating employees. The first recommendation is to apply results from existing analyses in developing a basis for new theories: while existing theories may have weaknesses and limitations, they may hold some level of validity, and the convergence of theories identified in meta-analyses establishes the ground for their application in developing new theories. Another recommendation is the elimination of virtual boundaries in organizations that create barriers to the sharing of resources and information. The elimination of boundaries should also be pursued in theory development by not restricting theories to particular activities or departments. Further, indicator variables of general and particular motivation types should be understood. An understanding of the motivation involved is also necessary, together with identification of the role of dynamism in human behavior. The article therefore establishes a new approach to developing motivational theories for effective application (Locke and Latham, 2004).

Hendry, Woodward, Bradley, and Perkins also identify the need for a change in understanding aspects of "reward and performance" (n.d., p. 1). They establish a new approach to understanding performance, its aspects, its measurability, and approaches to its improvement. The authors, for example, identify inefficiency in the traditional accounting approach to measuring performance. They also identify dynamism in the corporate world that has demonstrated the necessity of measuring performance, as well as core principles for measurement. The article also associates performance with employees' capacity, relationships between employees and supervisors, and different approaches to rewards.

Thursday, September 26, 2019

Evaluating Frontiers North Adventures Corporate Social Responsibility Research Paper

Frontiers North Adventures is one of Canada's most successful tourism companies. The company started in 1986 in northern Canada to provide Authentic Arctic Experiences. The company is family owned and has over 30 years of experience in eco-tourism. The business's clients have consistently felt satisfied with the Arctic experience after travelling with Frontiers North Adventures. The company has collaborated with several international and national organizations to ensure that its commitment to sustainability and conservation policies is upheld. In addition, the company has collaborated with Polar Bears International to provide some of the best and most breathtaking polar bear sightings while ensuring that the ecosystems in which these polar bears live are not endangered. Frontiers North Adventures' most famous adventure attractions include Northern Lights viewing, beluga whale watching, and polar bear experiences. The company has successfully managed to operate in this tricky field for more than 30 years. It has been named one of the top three sustainable tour operators in Canada because of its corporate social responsibility, and it has been lauded nationally for its sustainability programs. The company has also been recognized for its work and has won a number of awards, including SKAL International's 2009 Ecotourism Award, Travel Manitoba's Sustainable Tourism Award, and several other ecotourism awards both nationally and internationally.

Frontiers North Adventures provides exciting wildlife packages in northern Canada. The goal of Frontiers North is to deliver to its guests a worthwhile wildlife experience in a responsible and ecologically friendly manner. The company not only inspires visitors to view and learn about local wildlife but also to learn about the history and culture of northern Canadian society. Frontiers North Adventures is devoted to social, environmental, and ethical accountability in order to uphold the well-being of visitors, the local public, and the ecosystem in which it operates.

International Business - Culture Essay

For example, GlaxoSmithKline, a pharmaceutical giant, has an intensive training program for managers and other employees on international assignments, better termed AI (Wolper, 2004). The AI program has been strategically designed to ensure that employees relocating to foreign countries are well aware of the new cultures; AI prepares them for any difficulties that may be encountered. Incentives are offered to encourage them to take the new assignments because research by Oudenhoven and De Boer (1995) observed that managers tend to have a stronger preference for doing business in countries with similar cultures, to avoid the stresses of dealing with diverse cultures, which has been identified as a problem in the current management of multinationals. The company therefore has to bear an added cost to ensure such managers are motivated to take up their new assignments in such new environments.

Language and aesthetics

Language barriers in communication refer to the different languages and dialects used in different countries and by different communities. … it refers to understanding the meaning of different symbolic behaviors used by different people when communicating, and what such symbolic meanings refer to, which is mostly a challenge faced by managers in international business. Learning to communicate effectively and to decode symbolic communication in many cultures is necessary to enhance effective communication. Communication effectiveness depends on two aspects: high- and low-context communication (Schneider & Barsoux, 2002). High-context communication implies that a message will not require any background information, while in low-context communication more information has to be given in the message for it to be effective (Schneider & Barsoux, 2002). Countries that exhibit more individualism, as Hooker elaborates, have low-context communication, while countries that exhibit collectivism have high-context communication. In low-context communication, people will need signs and other images to remind them of, or to communicate, a particular message, while in high-context cultures such is not required, as individuals hold values that enable them to assimilate such communication as a norm. Behavioral norms are more entrenched in high-context communication, such that everyone is expected to know and understand the norms in order to avoid breaking them and getting on the wrong side of the law. As explained earlier, Mexicans are a masculine society that places particular value on saving face, especially for a male employee. Direct confrontation with such people is therefore not appreciated; communication has to be handled in such a way that the individual feels respected and saves face. Likewise, Malays are a people who observe culture and exhibit high-context communication. For example, using the left hand to give something or point…

Wednesday, September 25, 2019

Description of My Room Essay

One opening is the door, which provides passage for entering the room from the TV lounge. Moving clockwise, next comes the wall in which there is no opening. Next to that is the wall with a large window located in its center. Next to that is the wall with another door, which joins my room to the bathroom. Both doors are of the same size, 4 feet by 8 feet each, where 4 feet is the width of the door and 8 feet is its height. The size of the window is 6 feet by 5 feet, 6 feet being the width of the window and 5 feet its height. The floor of my room is entirely covered with ceramic tiles. There is marble skirting, 5 inches high, on every wall, starting from the finished floor level. The purpose of this skirting is to keep the walls protected against stains and marks that might be left by shoes. The skirting also protects the walls from the watermarks that might otherwise be left while cleaning the room. A prominent feature of the wall that has no opening in it is the fireplace. A heater is placed in the fireplace; the heater is connected to a gas pipe embedded in the wall. A chandelier hangs from the middle of the ceiling, extending 2 feet down into the air. One tube light is located at a vertical distance of 6 feet from the floor on the wall between the bathroom and my room. A small bulb is fixed at a height of 8 feet from the floor on the wall with no opening. An air conditioner has been fixed above the window to regulate the temperature and keep the room air-conditioned. Mauve silk curtains drop over the window from its top all the way down to the floor. The curtains do not have any print and are thus plain. A lilac frill covers about a foot of the curtains from the top. The window opens onto the lawn in front of my room. I can see oak trees and mulberry bushes from inside my room.

Monday, September 23, 2019

Report in MLA Style Essay

And the last element is resolution, when the conflict is resolved in some way. A vivid example is given in the book "The Illustrated Mum", when the protagonist dares to speak to her mother about the problem. The genre of children's literature is rather specific, since it demands writing that is engaging and at the same time easy to read and understand. First of all, it must be about childhood or animals. The plot has to be simple and straightforward, and the author expresses a child's point of view; as a rule the stories tend towards fantasy, use repetition, take the form of a pastoral idyll, represent the world from an innocent viewpoint, are didactic, and try to balance the idyllic and the didactic. Genre is a type of literature in which all the members of one genre share common characteristics (Chapleau, p. 24). Nancy Anderson, associate professor in the College of Education, distinguishes seven genres of children's literature:

1. Picture books, such as board, concept, pattern and wordless books (Chapleau, p. 24). They are characterized by colorful pictures accompanied by small pieces of text, which makes the reading process interesting and engaging for children. The example here is "The Sleeping House".
2. Traditional literature: myths, fables, ballads, folk music, legends, and tales. Traditional literature is characterized by stable descriptions of natural events and situations that are common throughout the world. Usually, such literature does not have a definite author. Examples here are "Hansel and Gretel" by Ian Wallace and "The Dragon's Pearl" by Julie Lawson.

2) There are several stages in getting a child interested in literacy: introduce a title and let children speculate what it is about; introduce some details of the plot; encourage the students to read the book in order to see whether their suggestions were right; organize a group discussion of the book in the…

Sunday, September 22, 2019

Development of Health Information Systems in Crete Case Study

Unfortunately, the system has received support from only a few individuals, as well as from a small number of both private and public health care providers. The few adopters of the NHS have installed some laboratory, administration and financial information systems in their workstations. The private sector is the leading adopter of the new systems and of networking across its various departments. The development of a regional health information network in Crete has been commendable compared to other regions in Greece. Crete has moved faster to enhance primary healthcare and to embrace ICT in the integration of health care information in Greece. It has therefore been earmarked as a role model for other regions in the field of health care information integration, as well as in the adoption of ICT. Consequently, Crete has received support from various quarters to implement the regional health information network.

The development in Crete has been attributed to Crete Tech. Crete Tech is an ICT company established in 1984, with a full-fledged research and development department that is fully equipped with personnel and equipment. Its vision is to integrate all healthcare services in Crete, and it has developed a strategy to roll out its services to health providers. In 1997, it set the objective of creating an integrated electronic health record whose purpose was to store and retrieve patients' records in the seventeen primary health care centers in Crete. The system took into account the needs and interests of the general practitioners, who wanted to network in order to promote primary health care. Throughout the strategy, Crete Tech amassed enviable support from the general practitioners and moral support from government officials. However, it had no support from the Regional Health Authority (RHA).

Saturday, September 21, 2019

Mutual Funds Essay

A mutual fund is a kind of investment company that combines money from many investors and backers and invests it in bonds, money-market instruments, stocks, other securities and sometimes even cash. In basic terms, a mutual fund is a large group of people who pool their money together for management companies to invest, and, like most things in the world, there are fees and commissions involved. Mutual funds are run by money managers, who invest the fund's capital and try to produce capital gains and income for the fund's investors. A mutual fund's portfolio is organized and maintained to match the investment objectives defined in its prospectus.

A mutual fund has several defining characteristics. Investors and backers purchase shares in the mutual fund from the fund itself, or through a broker or fund agent, and cannot buy the shares from other backers on a secondary market such as the NASDAQ stock market or the New York Stock Exchange. The price investors pay for their mutual fund shares is the fund's net asset value (NAV) per share plus any fees the fund may charge at the time of purchase, such as sales charges, also known as sales loads. Mutual fund shares are redeemable, meaning that when investors want to sell their shares, they sell them back to the mutual fund, or to a broker acting for the fund, at the net asset value less any fees the mutual fund may charge, such as deferred sales loads or redemption fees. Mutual funds commonly sell their shares on a continuous basis, although some funds will stop selling when, for instance, they reach a certain level of assets under management. The investment portfolio of a mutual fund is typically managed by a separate entity known as an investment adviser that is registered with the SEC; mutual funds themselves are also registered with the SEC and subject to SEC regulation.

There are many types of mutual funds, including index funds, stock funds, bond funds, and money market funds. Each type of mutual fund has a different investment objective, strategy and investment portfolio. Different mutual funds are also subject to different risks, volatility, and fees and expenses. Fees related to a mutual fund reduce returns on fund investments and are an important feature that investors should consider when buying mutual fund shares.

Mutual funds come in two main types, categorized by how the fees are charged: load mutual funds and no-load mutual funds. A load mutual fund charges for the shares purchased plus an initial transaction fee. The initial transaction fee is typically no more than 9% of the invested amount, or it may be a standard fee depending on the mutual fund provider; this fee is added to your purchase as a sales charge. There are a couple of different types of load funds. A back-end load means the fee is charged when you redeem the mutual fund; a front-end load is the opposite and means the fee is charged up front. A no-load fund means investors and backers can buy and redeem the mutual fund units or shares at any time without a commission or sales charge, although some companies such as banks and broker-dealers may charge fees and commissions for the transaction and exchange of mutual funds, and many no-load funds charge a fee if you redeem them early.
Most people endorse avoiding load funds altogether, and studies have shown that load mutual funds and no-load mutual funds offer similar returns; the difference is that one of them charges a commission fee. A 12b-1 fee is a yearly marketing or distribution fee on a mutual fund. The 12b-1 fee is treated as an operational expense and is included in the fund's expense ratio; it is usually between 0.25% and 1% of a fund's net assets. The name of the fee comes from a section of the Investment Company Act of 1940. An exchange-traded fund, or ETF, is a security that tracks an index, a group of assets or a commodity but trades like a stock on an exchange. Prices for ETFs change throughout the day as they are bought and sold. Because ETFs trade like stocks, they do not trade at a single end-of-day NAV the way mutual fund shares do.

References

1. U.S. Securities and Exchange Commission information on mutual funds. U.S. Securities and Exchange Commission (SEC). Retrieved 2011-04-06.
2. Fink, Matthew P. (2008). The Rise of Mutual Funds. Oxford University Press. p. 9.
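To make the fee mechanics above more concrete, here is a minimal sketch in Python using purely hypothetical numbers (no real fund, load, or return is implied). It treats the 12b-1 fee as a simple subtraction from the gross annual return; in practice the fee is accrued continuously as part of the expense ratio, so this is only a rough approximation.

```python
# Hypothetical illustration of mutual fund fee arithmetic.
investment = 10_000.00      # amount the investor hands over
front_end_load = 0.05       # assumed 5% front-end sales load
nav_per_share = 25.00       # assumed net asset value per share
fee_12b1 = 0.0075           # assumed 0.75% annual 12b-1 fee

# The front-end load is deducted before any shares are bought.
amount_invested = investment * (1 - front_end_load)   # 9,500.00
shares_bought = amount_invested / nav_per_share        # 380 shares

# The 12b-1 fee shows up as a drag on the fund's annual return
# rather than as a separate bill to the investor.
gross_annual_return = 0.07
net_annual_return = gross_annual_return - fee_12b1     # 0.0625

print(f"Shares purchased: {shares_bought:.2f}")
print(f"Approximate return after 12b-1 drag: {net_annual_return:.2%}")
```

Running this prints 380.00 shares and an approximate 6.25% net return, which is simply a way of seeing that the load reduces the money actually invested while the 12b-1 fee quietly reduces the return each year.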

Friday, September 20, 2019

How Organizations Ensure Job Satisfaction

INTRODUCTION

The world has been changing into a global village quite rapidly since the beginning of the 21st century. Gone are the days of the dark ages, when employers could exploit their workers by extracting maximum output in exchange for little or no reward or incentive. In today's world, thanks to improved communication networks, one cannot keep others in the dark about their rights, and organizations have to fulfill their responsibilities according to global standards. Similarly, the internet has enabled people to link themselves with others through websites. This new reality is working towards the objective of creating new sociological arrangements within the context of culture, and the same is true of corporate culture. Having realized the force of competitiveness in global markets and between individual organizations, it has become essential for any organization to make certain that it develops and retains personnel who are dedicated and faithful to the organization for an unlimited time.

Workers or employees who are happy and satisfied with the work they are assigned to do, or with the organization's culture regarding relations with its employees, ultimately feel motivated to continue their relationship with that organization as a faithful, devoted, committed and talented workforce. But many theorists feel that a great number of employees do not have the level of job satisfaction at which they can be regarded as motivated towards achieving the goals of the organization. Because of this dissatisfaction, such employees keep looking for alternative jobs where they may be able to experience a higher degree of job satisfaction. A high degree of job satisfaction is reflected in a high retention rate and a low turnover rate; in other words, the turnover rate can be taken as a measure of the job satisfaction level of the employees in any organization. Organizations that fail to retain their able and talented workforce, and cannot make them loyal to organizational goals, face problems in raising their production level and profitability. Finck, Timmers and Mennes (1998) highlighted the problem that business excellence can be achieved only when employees are excited by what they do, i.e. employees should be satisfied with their work and job conditions in order to achieve the high goals of an organization.

Employee motivation and its link to the job satisfaction of employees has been a matter of study for ages. Managers have to rely on their human resources to get things done and therefore need to know which factors would be most helpful in building a workforce with a high level of job satisfaction. Motivating employees is considered a factor that has the power to make workers satisfied with their jobs. But it is an understood fact that one cannot directly motivate others; one can only create the conditions in which people feel motivated themselves. Spector (2003) says that a number of factors can help in motivating people at work, some of which are tangible, such as money, and some of which are intangible, such as a sense of achievement. The accomplishment of any organization greatly depends on the contribution of its labor force. It is also said that such contributions are triggered by those features of people's work environment that motivate them to devote more material and intellectual vigor to their work. In this way the organization's objectives are pursued and accomplished.
Motivation and job satisfaction are therefore regarded as key determinants of organizational success, and the two are closely interlinked. In order to have a highly productive and loyal workforce, organizations strive to take measures that create a feeling of satisfaction and well-being in their workers. But does it really matter, or is it only a common myth that motivation has an influence on the job satisfaction level of employees? The aim of this study is to examine the relationship between the motivation and the job satisfaction of employees and to test it through statistical measures.

1.2 Concepts of Employee Motivation and Job Satisfaction

Definitions of employee motivation

The term motivation is derived from the Latin word movere, which means to move (Baron, Henley, McGibbon & McCarthy, 2002). This means that motivation is a kind of energy that helps people advance towards the achievement of certain goals. A great number of researchers over the years have studied the concept of motivation and have tried to extract its true definition, but motivation cannot be defined in an explicit manner; rather, it is better treated as a phenomenon or a concept than as a simple remark. Campbell and Pritchard (1976) defined motivation as a label for the determinants of the choice to begin effort on a certain task, the choice to expend a certain amount of effort, and the choice to persist in expending effort over a period of time. Motivation is therefore considered an individual's behaviour resulting from a set of inter-related factors, where some variables, such as the individual's skills, abilities and knowledge, have to be taken as constants.

There are many perspectives on motivation; some of them are given below. Beck (1983) stated that four basic philosophies underlie the variety of views about motivation in the workplace. According to him, a person may be rational about his economic conditions, may want to engage in more social activities and strong social relations, may want to satisfy his need for self-actualization, or may be driven by a mixture of all of the above needs. Theories about the rational economic man assume only the power of economic conditions over the overall behaviour of a person; they assume that people are rational and will make the right decisions for their economic well-being. Organizations that emphasize extrinsic rewards for their employees, for example pay raises or fringe benefits, follow this school of thought. The second kind of theories assume that the basic need of a person is social; these theories assume that people are mainly motivated by social needs such as making friends and having good relationships with their colleagues. In this case, organizations want to create a more conducive and happy environment where their employees are satisfied with the people around them and where they can maintain good relationships with the people at their workplace. The third perspective on motivation, according to theorists, is that a person's basic need is self-actualization. It holds that people can be motivated through intrinsic measures, as they take pleasure in doing a good job and receiving compliments in response to it. That is, people derive satisfaction from their accomplishments. Organizations that believe in this approach may build a system where rewards are based on high performance.
Lastly, the complex man approach argues that the motivation of people is a much more complex system that can be based on many factors, such as emotions, motives, abilities and experiences. These factors may move up or down the scale from time to time, and the changes in their levels come from behaviours that people newly learn as time passes.

All of the above perspectives on motivation have prompted researchers and theorists to present a number of different definitions of motivation. According to Schultz and Schultz (1998), motivation can be regarded simply as the workplace and personal characteristics of people that may explain their behaviour on the job. Some authors are of the view that intrinsic conditions are more powerful than the work-related characteristics of a person. Spector (2003) regarded motivation as an inner state of mind of a person that persuades him to engage in some particular kind of behaviour. Spector argued that motivation may be studied from two perspectives. One perspective, according to him, is that motivation concerns the direction of behaviour, that is, the particular behaviour people choose from a number of possible behaviours; the intensity of such behaviour can differ with the amount of effort required to accomplish the task. The second perspective is that an individual is motivated by the desire to attain particular goals, a motivation derived from the person's own needs and desires. Petri (1996) also stated that motivation can be taken as a force that acts on an individual to initiate some particular behaviour; this explains why some behaviour is more intense in particular situations, but not in others. The definition of motivation according to Gouws (1995) is that motivation originates from within an individual's own self, either consciously or unconsciously, to fulfil a given task successfully because the person takes pleasure in fulfilling that particular task; rewards from others are not important for individuals who are motivated intrinsically. Beach (1980) regarded motivation as a readiness to expend energy to achieve a goal or incentive. According to him, behaviours tend to be repeated when they are rewarded by others, but behaviours that are not properly rewarded, or that are punished, tend to die out with the passage of time. He recognized, however, that intrinsic motivation is linked to job content and comes to light when people are satisfied by performing, or just by engaging in, some kind of activity. Van Niekerk (1987) regarded motivation at the workplace as created by the workplace environment and conditions, which exert an influence on workers to perform some kind of activity of their own will. According to him, workers want to reach specific goals in order to gain inner satisfaction and to satisfy their own needs. Pinder (1998) framed his definition with the organizational workplace in mind, explaining work motivation as a set of internal and external forces that initiate work-related behaviours. According to the definition of Pinder (1980), work motivation has features that are invisible, created within a person's inner self, and researchers therefore must rely on established theories for guidance in measuring work motivation.
For the purpose of this particular study, employee motivation is taken as an instinctive force, maintained and shaped by a set of personal as well as workplace characteristics, that depends on the particular needs and motives of the workers. As mentioned above, the concept of motivation is of very high importance to the effectiveness of an organization, as much research shows that motivation creates a link between job satisfaction and the job performance of employees, and job performance is a determinant of the profitability and success of the organization. So, in order to keep their employees optimally motivated, it is necessary for an organization to focus on the factors in job content that result in employee motivation and job satisfaction. It is also necessary for managers and leaders to have a good knowledge of the different motivational theories in order to manage effectively: they need to choose the right theory to motivate a particular person in a particular situation and thereby have higher-performing and more satisfied employees. The different theories of motivation are discussed below, along with a critical view of them. These motivation theories are categorized as need theories of motivation, cognitive theories of motivation, and reinforcement theories of motivation.

THEORIES OF MOTIVATION

Motivation is a widely researched concept in the fields of management and the behavioural sciences. The concept of motivation is drawn from a broad spectrum of perspectives, but not all of these perspectives have retained the influence they had when they were first presented by theorists; one example of a less influential perspective is Maslow's hierarchy of needs theory (Wicker & Wiehe, 1999). Their contribution cannot, however, be neglected or denied, as the basis of later motivation theories originated from these perspectives. Motivation theories are generally categorized into three classes: need theories of motivation, cognitive theories of motivation, and reinforcement theories of motivation (Baron et al., 2002).

Needs-Based Theories of Motivation

Need theories of motivation are also called content theories, as they explain the substance of motivation (Hadebe, 2001). These theories propose that the internal states of mind of individuals invigorate and direct their behaviours.

Maslow's hierarchy of needs theory

Abraham Maslow's theory of the hierarchy of needs is considered the most widely known theory in the field of motivation research (Van Niekerk, 1987). It was introduced by Abraham Maslow in 1943. The basic principle of the theory is that people are motivated by the urge to fulfil their needs, or to remedy deficiencies. These needs may be grouped into five categories, and the theory argues that they are arranged hierarchically, with lower-order needs having to be satisfied before higher-order needs come into play (Gouws, 1995). Maslow (1968) emphasized that gratification of one basic need opens consciousness to domination by another. The needs, listed from the lowest to the highest level, are:

1. Physiological needs
2. Safety needs
3. Social needs
4. Egotistical needs
5. Self-actualization needs

Physiological needs are the basic needs necessary for a person's survival, e.g. hunger or thirst. Safety needs do not only mean that a person wants physical safety and security of life.
Rather, they also include personal security, such as a safe and secure working life without constant anxiety. Social needs refer to the wish to have friends and family, from whom a person derives inner pleasure and love, whereas egotistical needs are based on a person's desire to have a respected and recognized place in society. The need for self-actualization is the topmost need in the hierarchy, as it stands for a person's drive towards the full growth of his potential personality, which is basically never totally achieved (Gouws, 1995).

Existence-Relatedness-Growth (ERG) theory

The theory presented by Alderfer is in fact an expansion of Abraham Maslow's theory of the hierarchy of needs. Alderfer argued that human needs are not arranged in hierarchical levels but rather reside on a continuum (Spector, 2003). Alderfer reduced Maslow's five needs to only three, which he termed Existence, Relatedness and Growth, hence the name ERG theory. Existence is basically the need of a human being to survive physically, free from hunger and fear; Relatedness corresponds to the social needs of a person; and Growth is basically the need of a person to grow personally and develop his or her personality. Alderfer emphasized that, because these needs occur on a continuum, all of them can be experienced at the same time (Alderfer, 1969). Regardless of the fact that Maslow's hierarchy of needs theory gathered very little support from empirical data, it had a positive effect on the policies of organizations, as managers' policies could now be more focused on the basic needs of employees. Also, the highest-level need in the hierarchy, the need for self-actualization, has been accepted by executives and managers, who now consider it a compelling motivator (Schultz & Schultz, 1998).

Herzberg's two-factor theory

Frederick Herzberg's two-factor theory is a well-known theory in the study of motivation. Herzberg developed this theory in 1954 while studying the behaviour of workers towards their jobs (Gouws, 1995). In fact, Herzberg set out to study workers' behaviour in order to gauge their job satisfaction, but over time the study gained its reputation as a motivation theory because of its motivator factors (Baron et al., 2002). Beach (1980) gave his opinion that this theory represents aspects related to motivation in the workplace rather than general human motivation. The theory distinguishes two sets of factors: hygiene factors and motivators. The hygiene factors may be associated with the lower-order needs in Maslow's hierarchy of needs. These hygiene factors are placed on a continuum running from factors which cause dissatisfaction towards factors which cause no dissatisfaction. The point to be noted here is that no dissatisfaction does not mean satisfaction, as these factors involve circumstances that help prevent dissatisfaction but do not lead to job satisfaction. Some examples of hygiene factors include the job status of employees, the level of supervision, working conditions, pay and benefits, and interpersonal relationships (Herzberg, 1966). Motivators are the factors that produce satisfaction in employees; the absence of these factors results in no satisfaction rather than dissatisfaction. The presence of these factors has a positive impact on employee performance and job productivity.
These motivator factors may be associated with the higher-order needs in Maslow's hierarchy, and they are placed on a continuum running from factors that are highly motivating to factors that are not motivating. Job content factors such as pleasure in the performance itself, recognition, and opportunities for advancement and promotion are included among the motivators (Herzberg, 1966). This theory has had a great impact on organizational psychology, as organizations now give their employees greater opportunity to plan and carry out their own work (Baron et al., 2002). The two-factor theory has been effective in the sense that employees now get work that is pleasurable and meaningful for them (Spector, 2003).

McGregor's Theory X and Theory Y

Douglas McGregor's (1960) Theory X and Theory Y represent an extension of his thoughts on motivation to the direction and supervision of employees in the workplace. McGregor's Theory X postulates that people do not take an interest in their work and try to avoid making any effort to accomplish the task, so they have to be coerced and pressured by strict measures into performing at the desired level. In this theory, the common man is believed to be highly unmotivated and to lack a sense of responsibility; he strives only to meet his lower-order needs. Such people are selfish and do not consider or care about organizational goals. In contrast to Theory X, Theory Y takes a more modern approach to motivation. It postulates that people are motivated towards the achievement of organizational goals, are keen to discipline themselves, are eager to take up responsibility, and are talented enough to create solutions to problems. McGregor regarded Theory Y as a more truthful and rational description of human behaviour and attitudes, since it represents the integration of individual and organizational goals. However, McGregor acknowledged that the theory does not offer a complete explanation of employee motivation (McGregor, 1960).

McClelland's learned needs theory

McClelland's theory is also referred to as the three-needs theory. McClelland argues that achievement-oriented people strive to meet three needs: the need for power (nPow), the need for affiliation (nAff), and the need for achievement (nAch). nPow denotes that people strive for control over others; they want to influence others' behaviour and be responsible for it. nAff refers to the desire to create and maintain enjoyable relations with those around them. nAch is the need to compete with others and to succeed in achieving goals set by the individuals themselves. According to McClelland these needs are not instinctive but are acquired through experience and learning (McClelland, 1987).

Cognitive Theories

Cognitive theories present motivation as a process of cognition, or the inner thoughts, values and beliefs that people draw on when making choices about their behaviour at work (Schultz & Schultz, 1998).

Equity theory

Equity theory was first introduced by Stacy Adams in 1965. Its basic principle is that individuals are motivated to attain a state of equity and fairness in their relationships with other people and with the organizations they work for (Adams, 1965). People make comparisons between their own inputs at the workplace and those of their companions or competitors, e.g.
their experience, qualifications and efforts, and the outcomes that they receive in return, e.g. fringe benefits and pay, working conditions and status at work. They then assign weights to these inputs and outcomes according to their significance and magnitude to themselves. The weighted sum of these inputs and outcomes creates an outcome/input ratio, and this ratio is the key factor in terms of motivation. A state of equity exists when a person's outcome/input ratio is equal to the ratio of others. If inequity exists in this ratio, the person wants to change it by reducing one factor, i.e. effort, or enhancing the other, i.e. outcomes. A perceived state of inequity is consequently the foundation for motivation (Baron et al., 2002). This theory helped to provide the foundation for studying the motivational repercussions of perceived injustice and bias in the workplace. It also laid the basis for more recent theories of justice (how job requirements and rewards are determined) (Cropanzano & Folger, 1996).

Goal-setting theory

Edwin Locke proposed goal-setting theory in 1968 (Beck, 1983). Spector (2003) portrayed this viewpoint on motivation as the theory that people's internal intentions motivate their behaviours; in other words, behaviours are driven by people's needs to achieve certain goals. Locke and Henne (1986) explained that goals affect behaviour in four ways: they direct attention to the behaviours that the individual believes will result in achievement of the goal; they mobilize effort towards the goal; they add to the person's persistence, which results in spending more time on the behaviours necessary to reach the preferred goal; and they inspire the person's search for successful strategies for goal attainment. For goals to motivate, they should be specific, challenging, attainable, committed to, accompanied by regular feedback, and set by the individual himself; only then do individuals become motivated by the goals.

Expectancy theory

Vroom presented his expectancy theory in 1967, in which he argued that people's behaviour is based on their expectations and beliefs about future events that are important and beneficial to them (Baron et al., 2002). Basically, the theory clarifies the importance of rewards in shaping the behaviour of individuals. It is focused on the internal cognitive states that lead to motivation: people are motivated to perform a task only when they believe that the task will lead to some kind of reward that is beneficial to them. The cognitive states given in expectancy theory are named expectancy, valence and instrumentality (Spector, 2003). Expectancy means that the individual expects that he has the ability to perform the behaviour required to lead to a desired outcome, e.g. working hard to achieve a promotion in the future. Valence stands for the value that an individual places on an outcome; the individual wants to know how attractive the outcome of a certain task would be for him. Instrumentality is the term used for an individual's perceived probability that a certain behaviour will lead to the preferred outcome.
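These three components are commonly summarized in textbook treatments of Vroom's model as a multiplicative relationship, often written in simplified form as:

$$\text{Motivational force} \;=\; \text{Expectancy} \times \sum \left(\text{Instrumentality} \times \text{Valence}\right)$$

On this reading, if any one of the terms is judged to be zero, say the person sees no chance of performing the behaviour successfully, or places no value on the outcome, the overall motivational force is also zero.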
Since its introduction, expectancy theory has stood as a well-known and important approach, but at the same time it has been criticized on the basis that its assumption of rational, calculating behaviour in individuals' decision-making is not true in all senses. Another criticism of this theory is that it fails to take into account the limited cognitive capacities of individuals (Baron et al., 2002).

Reinforcement Theories

Reinforcement theories assume that the behaviour of people in the workplace is mainly established by its apparent encouraging or harmful consequences (Baron et al., 2002). The reinforcement theories are based on the idea presented in the Law of Effect, which Hull (1943) developed further in his Drive theory, suggesting that effort is a direct function of drive multiplied by habit, where habit is the result of reinforcement of behaviour. The rewards for behaviour can be tangible, for example money and pay raises, or intangible, for example praise for a certain behaviour (Spector, 2003). As a result, reinforcement theory has been highly significant in establishing ideas relating to rewards and monetary incentives, as well as recognition and appreciation techniques, and these reinforcement techniques are practised in many organizations nowadays (Schultz & Schultz, 1998). Reinforcement theory departs from other motivation theories in that it does not take into account the underlying factors or needs for which a person wants rewards; it only considers the relationship between reinforcement and the behaviour of employees in the workplace. But its importance cannot be denied, as research on this topic has provided empirical evidence that rewards can be highly influential in improving job performance (Spector, 2003).

All of the theories discussed above have contributed considerably to different current viewpoints on motivation and to an appreciation of the concept of motivation in the workplace. Undoubtedly, the theories of researchers and authors over the years have had an impact on organizations' ability to shape their organizational psychology by taking effective and practical measures to meet the challenge of making their employees motivated and satisfied with their jobs, and thereby enhancing productivity and profitability.

JOB SATISFACTION

The concept of job satisfaction attracts great attention from researchers and theorists, and also from organizations these days, as its importance to organizational productivity has been established. Managers now feel more responsible for keeping their employees satisfied, because job satisfaction has a prime effect on the productivity of the organization (Arnold & Feldman, 1986). Organizations are aware of the fact that personnel who derive satisfaction from their work add massively to organizational efficiency and ultimate survival. A concept with such a marked effect on organizational and personal life clearly justifies a matching amount of attention.

Definitions of Job Satisfaction

Many definitions of the job satisfaction concept have been given over time. Arnold and Feldman (1986, p. 87) defined job satisfaction as the sum total of the overall affect that people have towards their job. Therefore, a high level of job satisfaction means that a person generally likes his work and takes pleasure in doing it; he has a positive stance towards it. McCormick and Ilgen (1980) regarded job satisfaction as an individual's attitude towards his job.
They added that this feeling is an affective response to the job, which may range from positive to negative along a continuum. Beck (1983) further noted that, since a job has many distinct facets, job satisfaction is essentially a summary of employee attitudes towards all of them.

Theories on Job Satisfaction

Beck (1983) observed that theories of job satisfaction have emotional, motivational and informational components, as do other approaches to the concept. Because these theories were discussed in detail in the section on motivational theories, only a short summary is needed here. Equity theory specifies that people generally want to receive what they consider a fair or equitable return for their efforts at work, and greater satisfaction is experienced when they perceive the return or reward they receive as equitable.

Aim of the Study / Research Motivation

As the topic of this study suggests, its basic aim is to investigate the relationship between the measures an organization takes to motivate its employees and the overall impact of those measures on employees' job satisfaction. Organizations need their production and business functions to run smoothly and consistently in order to perform to international standards. For this purpose they have to recruit, manage and retain proficient, well-trained and optimally productive personnel. An organization's personnel play an important role in raising production and profit, but only on the condition that they are dedicated, devoted and faithful to its objectives, and staff show these characteristics only when they are satisfied with the work they do and are consequently motivated to continue their relationship with the organization. A systematic understanding of the nature and principal causes of employee satisfaction and motivation will help employers design strategies that bring about the required positive changes in their motivation programmes and ultimately implement those programmes, moving towards optimal employee reliability and retention. Such strategies may include selecting a mix of intrinsic and extrinsic rewards to boost employee motivation, and discarding those human resource policies and practices that hold back employee motivation and satisfaction. A large number of studies have been conducted on employee motivation, job satisfaction and the relationship between them, as well as on various combinations of related variables. After a thorough examination of earlier studies, the researcher was able to formulate a problem statement concerning employee motivation and job satisfaction. In this regard, the study aims to add to the existing knowledge about motivation and job satisfaction and the implications of these concepts for organizational psychology.

1.4 Problem Statement

Through a deep examination of earlier studies and a thorough review of the existing literature, the researcher identified a strong impact of companies' motivation policies on the job satisfaction levels of their employees. Many studies also show how these two aspects relate to many other features of an organizational culture.
According to Watson (1994), contemporary business has realized that motivated and satisfied personnel raise production levels and deliver results that carry through to the bottom line. Schofield (1998) conducted a convincing study showing that the way people are managed has a powerful impact on both the productivity and the profitability of an organization; his study established the importance of job satisfaction, employee motivation and commitment, and corporate culture for organisational capability and its limits. By keeping in mind the existing literature about these two variables, motivation and job satisfaction

Thursday, September 19, 2019

Rap

Tha Century / 100 Bars Deep Now This's Gonna Be Sticky.... I shapeshift monotonous mockeries into a metamorphisis of melodic monogamy... Im more morbid audibly, smear your extremities with catatonic embalment fluid.. Smoke you for the toxin release! My words constrict airholes until all oxygen is ceased... Kids is tryin to elevate they point of views by studying topography?! Ha! You god-awful emcees.... Watch true suns set across the horizon of your premises... I shadowbox with the reflection of an extra-terrestrial nemesis, to sharpen my depth perception! Intense ressurections of mental sections, to ascend beyond eleven tenths of perfection... I was born when the clock was confused and twelve fell into thirteen... From dusk to dawn my embryo's vitality radiated a pulsing kinetic energy... I disperse beams! 360 degrees of devastation, and six degrees of seperation.. Equals 60 emcees thats gon die from each gamma ray salivation... I still see 20/20 with a cycloptic chromosome, so all mimes manipulated by psionic overtones.. Are overthrown from the underworld overture, over your vocal tone... Undulation, running flows over oval opal stones! Spitting sinister cyclones! If your real or not, its your plot, life behind a twisted doorlock...Amongst wizardous warlocks! Wither in sweltering weather... Swelling cerebellums in cellars, swirling in pools of clorox! Potions pour from my incisors, and inject adrenalin inside words.. In sin curves and blind blurs, reminders of pioneers and rectangularly erected pine boards... The riddle was solved whence it was exposed for its awfulness... I dreamed of an eon long apocolypse, only to wake up and find i was revolving in it... Once i shed my body, its residue will vaporize into cumulonimbus stormclouds... While i study obelist physics, and calculate diabolical arithmetics... Im sicker then cancer victims spittin up tumorous appendiges, then lighting a cigarette.. My aesthetics are acrobatic, the accepted eclectic with savage epileptic habits.. I feed your asses mass laxatives, as to extract gastric acids when the gas passes... Flash flasks of the nastiest wrath, worse then moldy thermoses of birth water contaminate.. Splash that in your eyes and laugh as your sinus collapse, and the virus attacks rampant.. Half of yall are clowns, spiritually vacation bound.. Likely contestants for the neighborhood talent show consolation round.. I put headphones in penetentiaries the way i spit these bars.. Battle? im the head blitzkrieg czar.. I diss emcees hard, thats why bitches be sparse... I slaughter in psychotic spasms like a vicious retard... Visually unscarred... Everytime i kill a victim my ammunition is re-charged... Im rippin seams apart... You couldnt find a rhythm in your weak heart.. OMNI hoe, we reach stars... I was born with my ambillical attached to the sun, and energy has granted me a tongue.. I turn tornadoes twisting 180 degrees from their regular rotation.

Wednesday, September 18, 2019

Socrates was a Wise and Harmless Man

Socrates and the Apology

Some of the best sources of information about Socrates' philosophical views are the early dialogues of his student Plato, who tried to provide a faithful picture of the methods and teachings of the great master. The Apology is one of the many recorded dialogues about Socrates. It is about how Socrates was arrested and charged with corrupting the youth, believing in no god(s) (atheism), and being a Sophist. He attended his trial and put up a good argument. I believe that Socrates was wrongfully accused and should not have been sentenced to death. In this paper I will discuss the charges laid against Socrates and how he attempted to refute them. One of the reasons Socrates was arrested was that he was accused of corrupting the minds of the students he taught. I personally feel that it is almost impossible for one person to corrupt the thoughts and feelings of a whole group of people. Improvement comes from a minority and corruption comes from the majority. Socrates is one man (a minority); therefore it is less likely that the youth were corrupted by Socrates than by some larger group of people (educators, council members, jurymen, etc.). Socrates was also put on trial for being an atheist. In the argument Socrates has with Meletus, Socrates gets Meletus to admit that Socrates is both an atheist and a theist. Considering that these two positions are totally incompatible, and Meletus admits to both, perhaps Meletus does not really understand what he is accusing Socrates of. I understand that back then not believing in religion was considered a crime, but actually sentencing someone to death for being different is totally uncalled for. Thirdly, Socrates was charged because he supposedly practiced making weak arguments strong, the mark of a Sophist. Socrates was a traveling teacher who talked with and challenged everyone he met, and he taught the art of persuasive speaking. He did not charge people money as most of the other Sophists did, but he did hold some beliefs similar to theirs. The Sophists thought that our minds are cut off from reality and that we are stuck in our own opinions of what the world is like. Socrates believed that reason or nature could not tell us why the world is the way it appears. The Sophists' point of view is best summed up like this: we can never step outside of the way things appear.

Tuesday, September 17, 2019

COP 3530, Discrete Data Structures and Algorithms, Summer 1999, Homework 1

Class Notes: Data Structures and Algorithms, Summer-C Semester 1999 - M WRF 2nd Period, CSE/E119, Section 7344

Homework #1 -- Solutions (in blue type)

Note: There have been many questions about this homework assignment. Thus, clarifications are posted below in red type. When you answer these questions, bear in mind that each one only counts four points out of 1000 total points for the course. Thus, each one should have a concise answer. No need to write a dissertation.

* Question 1. Suppose you want to find the maximum of a sequence or vector a of n distinct integers. Write an algorithm to do this in O(n) time, for any sequence of n distinct integers.

    max = very large negative number
    input(a)
    for i = 1 to n do
        if a[i] > max then max = a[i]
    endfor
    output(max)

* Question 2. You could assume that you know the maximum value of a before you search for it. That is, if a has values in the interval [0,101], then the maximum would be 101. The best case (least work) in the preceding algorithm would occur when the maximum of the n-element sequence is the first element of the sequence. Where is the maximum located for the (a) worst case, and (b) average case? Support each answer with a proof, not just an example. Alternatively, you could assume that the maximum was not known beforehand, and a)-b), above, might be easier... Either assumption is o.k.

    o Case 1: Maximum unknown a priori -- You have to search through the entire array to find the maximum. Thus, there is no worst case or best case if you consider the work as comparisons (dominant cost) only.

    o Case 2: Maximum known a priori -- This becomes a linear search problem (find the maximum).
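As a supplement to the pseudocode above, here is a short Python rendering of the Question 1 algorithm; the function name, the comments and the sample input are illustrative additions of this write-up, not part of the original assignment or solution set.

    def find_max(a):
        # Return the maximum of a non-empty sequence in one O(n) pass.
        best = a[0]              # start from the first element instead of a sentinel value
        for value in a[1:]:      # exactly one comparison per remaining element
            if value > best:
                best = value
        return best

    print(find_max([7, 3, 101, 42, 0]))   # prints 101

Starting from the first element avoids having to pick a "very large negative number" as a sentinel, but the work is otherwise the same single pass over the n elements.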

Achieving Fault-Tolerance in Operating System Essay

Introduction Fault-tolerant computing is the art and science of building computing systems that continue to operate satisfactorily in the presence of faults. A fault-tolerant system may be able to tolerate one or more fault-types including – i) transient, intermittent or permanent hardware faults, ii) software and hardware design errors, iii) operator errors, or iv) externally induced upsets or physical damage. An extensive methodology has been developed in this field over the past thirty years, and a number of fault-tolerant machines have been developed – most dealing with random hardware faults, while a smaller number deal with software, design and operator faults to varying degrees. A large amount of supporting research has been reported. Fault tolerance and dependable systems research covers a wide spectrum of applications ranging across embedded real-time systems, commercial transaction systems, transportation systems, and military/space systems – to name a few. The supporting research includes system architecture, design techniques, coding theory, testing, validation, proof of correctness, modelling, software reliability, operating systems, parallel processing, and real-time processing. These areas often involve widely diverse core expertise ranging from formal logic, mathematics of stochastic modelling, graph theory, hardware design and software engineering. Recent developments include the adaptation of existing fault-tolerance techniques to RAID disks where information is striped across several disks to improve bandwidth and a redundant disk is used to hold encoded information so that data can be reconstructed if a disk fails. Another area is the use of application-based fault-tolerance techniques to detect errors in high performance parallel processors. Fault-tolerance techniques are expected to become increasingly important in deep sub-micron VLSI devices to combat increasing noise problems and improve yield by tolerating defects that are likely to occur on very large, complex chips. Fault-tolerant computing already plays a major role in process control, transportation, electronic commerce, space, communications and many other areas that impact our lives. Many of its next advances will occur when applied to new state-of-the-art systems such as massively parallel scalable computing, promising new unconventional architectures such as processor-in-memory or reconfigurable computing, mobile computing, and the other exciting new things that lie around the corner. Basic Concepts Hardware Fault-Tolerance – The majority of fault-tolerant designs have been directed toward building computers that automatically recover from random faults occurring in hardware components. The techniques employed to do this generally involve partitioning a computing system into modules that act as fault-containment regions. Each module is backed up with protective redundancy so that, if the module fails, others can assume its function. Special mechanisms are added to detect errors and implement recovery. Two general approaches to hardware fault recovery have been used: 1) fault masking, and 2) dynamic recovery. Fault masking is a structural redundancy technique that completely masks faults within a set of redundant modules. A number of identical modules execute the same functions, and their outputs are voted to remove errors created by a faulty module. Triple modular redundancy (TMR) is a commonly used form of fault masking in which the circuitry is triplicated and voted. 
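To make the voting step concrete, the following Python sketch shows a two-out-of-three majority vote over the outputs of a triplicated module; it only illustrates the voting logic under the assumption of at most one faulty module, and the function names are mine rather than anything defined in the text.

    def tmr_vote(a, b, c):
        # Return the majority value among three redundant module outputs,
        # masking a single faulty module.
        if a == b or a == c:
            return a
        if b == c:
            return b
        # No two outputs agree: two modules are in error, so the TMR stage fails.
        raise RuntimeError("TMR failure: no majority among module outputs")

    print(tmr_vote(42, 42, 99))   # the corrupted third output is masked; prints 42

In real hardware the three results would come from three physically independent copies of the module; the point here is only that a single disagreeing output is outvoted and never reaches the rest of the system.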
The voting circuitry can also be triplicated so that individual voter failures can likewise be corrected by the voting process. A TMR system fails whenever two modules in a redundant triplet create errors, so that the vote is no longer valid. Hybrid redundancy is an extension of TMR in which the triplicated modules are backed up with additional spares, which are used to replace faulty modules, allowing more faults to be tolerated. Voted systems require more than three times as much hardware as non-redundant systems, but they have the advantage that computations can continue without interruption when a fault occurs, allowing existing operating systems to be used. Dynamic recovery is required when only one copy of a computation is running at a time (or in some cases two unchecked copies), and it involves automated self-repair. As in fault masking, the computing system is partitioned into modules backed up by spares as protective redundancy. In the case of dynamic recovery, however, special mechanisms are required to detect faults in the modules, switch out a faulty module, switch in a spare, and instigate those software actions (rollback, initialization, retry, and restart) necessary to restore and continue the computation. In single computers special hardware is required along with software to do this, while in multicomputers the function is often managed by the other processors. Dynamic recovery is generally more hardware-efficient than voted systems, and it is therefore the approach of choice in resource-constrained (e.g., low-power) systems, and especially in high-performance scalable systems in which the amount of hardware resources devoted to active computing must be maximized. Its disadvantage is that computational delays occur during fault recovery, fault coverage is often lower, and specialized operating systems may be required.

Software Fault-Tolerance – Efforts to attain software that can tolerate software design faults (programming errors) have made use of static and dynamic redundancy approaches similar to those used for hardware faults. One such approach, N-version programming, uses static redundancy in the form of independently written programs (versions) that perform the same functions, and their outputs are voted at special checkpoints. Here, of course, the data being voted may not be exactly the same, and a criterion must be used to identify and reject faulty versions and to determine a consistent value (through inexact voting) that all good versions can use. An alternative dynamic approach is based on the concept of recovery blocks. Programs are partitioned into blocks and acceptance tests are executed after each block. If an acceptance test fails, a redundant code block is executed. An approach called design diversity combines hardware and software fault-tolerance by implementing a fault-tolerant computer system using different hardware and software in redundant channels. Each channel is designed to provide the same function, and a method is provided to identify if one channel deviates unacceptably from the others. The goal is to tolerate both hardware and software design faults. This is a very expensive technique, but it is used in very critical aircraft control applications.

The key technologies that make software fault-tolerant

Software involves a system's conceptual model, which is easier than a physical model to engineer and test for things that violate basic concepts.
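As a concrete illustration of the recovery-block scheme described above, here is a hedged Python sketch in which each alternate block is tried in turn and its result is checked by an acceptance test; the alternates and the acceptance test are invented placeholders chosen only to show the control flow, not code from any real fault-tolerant system.

    def recovery_block(alternates, acceptance_test, x):
        # Try each redundant code block in order until one passes the acceptance test.
        for alternate in alternates:
            try:
                result = alternate(x)
            except Exception:
                continue                      # a crash is treated like a failed acceptance test
            if acceptance_test(x, result):
                return result
        raise RuntimeError("all alternates failed the acceptance test")

    def primary(x):
        return x * 0.5                        # deliberately faulty "fast" square root

    def secondary(x):
        r = x if x > 0 else 0.0               # slower but correct alternate (Newton's method)
        for _ in range(60):
            r = 0.5 * (r + x / r) if r else 0.0
        return r

    accept = lambda x, r: abs(r * r - x) < 1e-6
    print(recovery_block([primary, secondary], accept, 9.0))   # primary is rejected; prints 3.0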
To the extent that a software system can evaluate its own performance and correctness, it can be made fault-tolerant—or at least error aware; to the extent that a software system can check its responses before activating any physical components, a mechanism for improving error detection, fault tolerance, and safety exists. We can use three key technologies—design diversity, checkpointing, and exception handling—for software fault tolerance, depending on whether the current task should be continued or can be lost while avoiding error propagation (ensuring error containment and thus avoiding total system failure). Tolerating solid software faults for task continuity requires diversity, while checkpointing tolerates soft software faults for task continuity. Exception handling avoids system failure at the expense of losing the current task. Runtime failure detection is often accomplished through an acceptance test or a comparison of results from a combination of "different" but functionally equivalent system alternates, components, versions, or variants. However, other techniques—ranging from mathematical consistency checking to error coding to data diversity—are also useful. There are many options for effective system recovery after a problem has been detected. They range from complete rejuvenation (for example, stopping with a full data and software reload and then restarting) to dynamic forward error correction to partial state rollback and restart.

The relationship between software fault tolerance and software safety

Both require good error detection, but the response to errors is what differentiates the two approaches. Fault tolerance implies that the software system can recover from—or in some way tolerate—the error and continue correct operation. Safety implies that the system either continues correct operation or fails in a safe manner. A safe failure is an inability to tolerate the fault. So, we can have low fault tolerance and high safety by safely shutting down a system in response to every detected error. It is certainly not a simple relationship. Software fault tolerance is related to reliability, and a system can certainly be reliable and unsafe or unreliable and safe, as well as the more usual combinations. Safety is intimately associated with the system's capacity to do harm; fault tolerance is a very different property. Fault tolerance is—together with fault prevention, fault removal, and fault forecasting—a means for ensuring that the system function is implemented so that the dependability attributes, which include safety and availability, satisfy the users' expectations and requirements. Safety involves the notion of controlled failures: if the system fails, the failure should have no catastrophic consequence—that is, the system should be fail-safe. Controlling failures always includes some form of fault tolerance—from error detection and halting to complete system recovery after component failure. The system function and environment dictate, through the requirements for service continuity, the extent of fault tolerance required. You can have a safe system that has little fault tolerance in it. When the system specifications properly and adequately define safety, then a well-designed fault-tolerant system will also be safe. However, you can also have a system that is highly fault tolerant but that can fail in an unsafe way. Hence, fault tolerance and safety are not synonymous.
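Returning to the checkpointing technique listed among the three key technologies above, the toy Python sketch below saves the task state before each step and rolls back to that saved copy if the step fails, then retries once; the step function and the dictionary-shaped state are assumptions made purely for this illustration.

    import copy

    def run_with_checkpoints(state, steps):
        # Execute the steps in order; on a failure, restore the last checkpoint and retry once.
        for step in steps:
            checkpoint = copy.deepcopy(state)   # save the state before the risky operation
            try:
                step(state)
            except Exception:
                state = checkpoint              # roll back: discard the partial, corrupted update
                step(state)                     # retry; a soft (transient) fault usually does not recur
        return state

    calls = {"n": 0}
    def flaky_increment(state):
        calls["n"] += 1
        if calls["n"] == 2:                     # simulate a transient fault on the second invocation
            state["total"] += 999               # corrupt the state mid-update...
            raise RuntimeError("transient fault")
        state["total"] += 1

    print(run_with_checkpoints({"total": 0}, [flaky_increment] * 5))   # prints {'total': 5}

The example models a soft fault: the corrupted partial update is thrown away by the rollback, so the retried step starts again from a consistent state.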
Safety is concerned with failures (of any nature) that can harm the user; fault tolerance is primarily concerned with runtime prevention of failures in any shape or form (including prevention of safety-critical failures). A fault-tolerant and safe system will minimize overall failures and ensure that when a failure occurs, it is a safe failure. Several standards for safety-critical applications recommend fault tolerance—for hardware as well as for software. For example, the IEC 61508 standard (which is generic and application-sector independent) recommends, among other techniques, "failure assertion programming, safety bag technique, diverse programming, backward and forward recovery." Also, the Defense standard (MOD 00-55), the avionics standard (DO-178B), and the standard for space projects (ECSS-Q-40-A) list design diversity as a possible means of improving safety. Usually, the requirement is not so much for fault tolerance (by itself) as it is for high availability, reliability, and safety. Hence, IEEE, FAA, FCC, DOE, and other standards and regulations appropriate for reliable computer-based systems apply. We can achieve high availability, reliability, and safety in different ways. They involve a proper reliable and safe design, proper safeguards, and proper implementation. Fault tolerance is just one of the techniques that assure that a system's quality of service (in a broader sense) meets user needs (such as high safety).

History

The SAPO computer built in Prague, Czechoslovakia was probably the first fault-tolerant computer. It was built in 1950–1954 under the supervision of A. Svoboda, using relays and a magnetic drum memory. The processor used triplication and voting (TMR), and the memory implemented error detection with automatic retries when an error was detected. A second machine developed by the same group (EPOS) also contained comprehensive fault-tolerance features. The fault-tolerant features of these machines were motivated by the local unavailability of reliable components and a high probability of reprisals by the ruling authorities should the machine fail. Over the past 30 years, a number of fault-tolerant computers have been developed that fall into three general types: (1) long-life, unmaintainable computers, (2) ultra-dependable, real-time computers, and (3) high-availability computers.

Long-Life, Unmaintained Computers

Applications such as spacecraft require computers to operate for long periods of time without external repair. Typical requirements are a probability of 95% that the computer will operate correctly for 5–10 years. Machines of this type must use hardware in a very efficient fashion, and they are typically constrained to low power, weight, and volume. Therefore, it is not surprising that NASA was an early sponsor of fault-tolerant computing. In the 1960s, the first fault-tolerant machine to be developed and flown was the on-board computer for the Orbiting Astronomical Observatory (OAO), which used fault masking at the component (transistor) level. The JPL Self-Testing-and-Repairing (STAR) computer was the next fault-tolerant computer, developed by NASA in the late 1960s for a 10-year mission to the outer planets. The STAR computer, designed under the leadership of A. Avizienis, was the first computer to employ dynamic recovery throughout its design. Various modules of the computer were instrumented to detect internal faults and signal fault conditions to a special test and repair processor that effected reconfiguration and recovery.
An experimental version of the STAR was implemented in the laboratory and its fault tolerance properties were verified by experimental testing. Perhaps the most successful long-life space application has been the JPL-Voyager computers that have now operated in space for 20 years. This system used dynamic redundancy in which pairs of redundant computers checked each-other by exchanging messages, and if a computer failed, its partner could take over the computations. This type of design has been used on several subsequent spacecraft. Ultra-dependable Real-Time Computers These are computers for which an error or delay can prove to be catastrophic. They are designed for applications such as control of aircraft, mass transportation systems, and nuclear power plants. The applications justify massive investments in redundant hardware, software, and testing. One of the first operational machines of this type was the Saturn V guidance computer, developed in the 1960s. It contained a TMR processor and duplicated memories (each using internal error detection). Processor errors were masked by voting, and a memory error was circumvented by reading from the other memory. The next machine of this type was the Space Shuttle computer. It was a rather ad-hoc design that used four computers that executed the same programs and were voted. A fifth, non-redundant computer was included with different programs in case a software error was encountered. During the 1970s, two influential fault-tolerant machines were developed by NASA for fuel-efficient aircraft that require continuous computer control in flight. They were designed to meet the most stringent reliability requirements of any computer to that time. Both machines employed hybrid redundancy. The first, designated Software Implemented Fault Tolerance (SIFT), was developed by SRI International. It used off-the-shelf computers and achieved voting and reconfiguration primarily through software. The second machine, the Fault-Tolerant Multiprocessor (FTMP), developed by the C. S. Draper Laboratory, used specialized hardware to effect error and fault recovery. A commercial company, August Systems, was a spin-off from the SIFT program. It has developed a TMR system intended for process control applications. The FTMP has evolved into the Fault-Tolerant Processor (FTP), used by Draper in several applications and the Fault-Tolerant Parallel processor (FTPP) – a parallel processor that allows processes to run in a single machine or in duplex, tripled or quadrupled groups of processors. This highly innovative design is fully Byzantine resilient and allows multiple groups of redundant processors to be interconnected to form scalable systems. The new generation of fly-by-wire aircraft exhibits a very high degree of fault-tolerance in their real-time flight control computers. For example the Airbus Airliners use redundant channels with different processors and diverse software to protect against design errors as well as hardware faults. Other areas where fault-tolerance is being used include control of public transportation systems and the distributed computer systems now being incorporated in automobiles. High-Availability Computers Many applications require very high availability but can tolerate an occasional error or very short delays (on the order of a few seconds), while error recovery is taking place. Hardware designs for these systems are often considerably less expensive than those used for ultra-dependable real-time computers. 
Computers of this type often use duplex designs. Example applications are telephone switching and transaction processing. The most widely used fault-tolerant computer systems developed during the 1960s were in electronic switching systems (ESS) that are used in telephone switching offices throughout the country. The first of these AT&T machines, No. 1 ESS, had a goal of no more than two hours downtime in 40 years. The computers are duplicated, to detect errors, with some dedicated hardware and extensive software used to identify faults and effect replacement. These machines have since evolved over several generations to No. 5 ESS which uses a distributed system controlled by the 3B20D fault tolerant computer. The largest commercial success in fault-tolerant computing has been in the area of transaction processing for banks, airline reservations, etc. Tandem Computers, Inc. was the first major producer and is the current leader in this market. The design approach is a distributed system using a sophisticated form of duplication. For each running process, there is a backup process running on a different computer. The primary process is responsible for checkpointing its state to duplex disks. If it should fail, the backup process can restart from the last checkpoint. Stratus Computer has become another major producer of fault-tolerant machines for high-availability applications. Their approach uses duplex self-checking computers where each computer of a duplex pair is itself internally duplicated and compared to provide high-coverage concurrent error detection. The duplex pair of self-checking computers is run synchronously so that if one fails, the other can continue the computations without delay. Finally, the venerable IBM mainframe series, which evolved from S360, has always used extensive fault-tolerance techniques of internal checking, instruction retries and automatic switching of redundant units to provide very high availability. The newest CMOS-VLSI version, G4, uses coding on registers and on-chip duplication for error detection and it contains redundant processors, memories, I/O modules and power supplies to recover from hardware faults – providing very high levels of dependability. The server market represents a new and rapidly growing market for fault-tolerant machines driven by the growth of the Internet and local networks and their needs for uninterrupted service. Many major server manufacturers offer systems that contain redundant processors, disks and power supplies, and automatically switch to backups if a failure is detected. Examples are SUN’s ft-SPARC and the HP/Stratus Continuum 400. Other vendors are working on fault-tolerant cluster technology, where other machines in a network can take over the tasks of a failed machine. An example is the Microsoft MSCS technology. Information on fault-tolerant servers can readily be found in the various manufacturers’ web pages. Conclusion Fault-tolerance is achieved by applying a set of analysis and design techniques to create systems with dramatically improved dependability. As new technologies are developed and new applications arise, new fault-tolerance approaches are also needed. In the early days of fault-tolerant computing, it was possible to craft specific hardware and software solutions from the ground up, but now chips contain complex, highly-integrated functions, and hardware and software must be crafted to meet a variety of standards to be economically viable. 
Thus a great deal of current research focuses on implementing fault tolerance using COTS (Commercial-Off-The-Shelf) technology.

References

Avizienis, A., et al. (Eds.) (1987): Dependable Computing and Fault-Tolerant Systems, Vol. 1: The Evolution of Fault-Tolerant Computing, Vienna: Springer-Verlag. (Though somewhat dated, the best historical reference available.)
Harper, R., Lala, J. and Deyst, J. (1988): "Fault-Tolerant Parallel Processor Architectural Overview," Proc. of the 18th International Symposium on Fault-Tolerant Computing FTCS-18, Tokyo, June 1988. (FTPP)
Computer (Special Issue on Fault-Tolerant Computing), Vol. 23, No. 7, July 1990.
Lala, J., et al. (1991): The Draper Approach to Ultra Reliable Real-Time Systems, Computer, May 1991.
Jewett, D. (1991): A Fault-Tolerant Unix Platform, Proc. of the 21st International Symposium on Fault-Tolerant Computing FTCS-21, Montreal, June 1991. (Tandem Computers)
Webber, S., and Jeirne, J. (1991): The Stratus Architecture, Proc. of the 21st International Symposium on Fault-Tolerant Computing FTCS-21, Montreal, June 1991.
Briere, D., and Traverse, P. (1993): AIRBUS A320/A330/A340 Electrical Flight Controls: A Family of Fault-Tolerant Systems, Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.
Sanders, W., and Obal, W. D. II (1993): Dependability Evaluation using UltraSAN, Software Demonstration in Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.
Beounes, C., et al. (1993): SURF-2: A Program for Dependability Evaluation of Complex Hardware and Software Systems, Proc. of the 23rd International Symposium on Fault-Tolerant Computing FTCS-23, Toulouse, France, IEEE Press, June 1993.
Blum, A., et al. (1994): Modeling and Analysis of System Dependability Using the System Availability Estimator, Proc. of the 24th International Symposium on Fault-Tolerant Computing FTCS-24, Austin, TX, June 1994. (SAVE)
Lala, J. H., and Harper, R. E. (1994): Architectural Principles for Safety-Critical Real-Time Applications, Proc. IEEE, Vol. 82, No. 1, Jan 1994, pp. 25-40.
Jenn, E., Arlat, J., Rimen, M., Ohlsson, J. and Karlsson, J. (1994): Fault Injection into VHDL Models: The MEFISTO Tool, Proc. of the 24th Annual International Symposium on Fault-Tolerant Computing (FTCS-24), Austin, Texas, June 1994.
Siewiorek, D. (Ed.) (1995): Fault-Tolerant Computing Highlights from 25 Years, Special Volume of the 25th International Symposium on Fault-Tolerant Computing FTCS-25, Pasadena, CA, June 1995. (Papers selected as especially significant in the first 25 years of fault-tolerant computing.)
Baker, W. E., Horst, R. W., Sonnier, D. P., and Watson, W. J. (1995): A Flexible ServerNet-Based Fault-Tolerant Architecture, Proc. of the 25th International Symposium on Fault-Tolerant Computing FTCS-25, Pasadena, CA, June 1995. (Tandem)
Tsai, T. K., and Iyer, R. K. (1996): "An Approach Towards Benchmarking of Fault-Tolerant Commercial Systems," Proc. of the 26th Symposium on Fault-Tolerant Computing FTCS-26, Sendai, Japan, June 1996. (FTAPE)
Kropp, N. P., Koopman, P. J., and Siewiorek, D. P. (1998): Automated Robustness Testing of Off-the-Shelf Software Components, Proc. of the 28th International Symposium on Fault-Tolerant Computing FTCS-28, Munich, June 1998. (Ballista)
Spainhower, L., and Gregg, T. A. (1998): G4: A Fault-Tolerant CMOS Mainframe, Proc. of the 28th International Symposium on Fault-Tolerant Computing FTCS-28, Munich, June 1998. (IBM)
Kozyrakis, C. E., and Patterson, D. (1998): A New Direction for Computer Architecture Research, Computer, Vol. 31, No. 11, November 1998.

Monday, September 16, 2019

Persuasive Paper on Video Game Violence Essay

Today's youth live in a time where video games are a fixture of entertainment. Video game consoles are found in almost every home, including a child's bedroom. I believe that the portrayal of violence in video games is not the reason for the increase in violent acts committed by and against youth. Parents and the government should understand that it is not the fault of the game itself. Modern parents should be engaged in the messages their children are receiving from video games and the images they are allowing them to witness. Creating more laws and legislation surrounding the sale and content of video games will not provide the protection that parents think they will. I believe that parents, rather than refusing to engage with the content their children are exposed to, must actively educate themselves and be aware of what their children are observing when they play video games. They need to actively seek out information about the game and what types of content it contains before their children start to play. Parents should not rely solely on the rating provided by the Entertainment Software Rating Board (ESRB), an independent board that provides ratings to video games. Games with the rating of Everyone, or "E", contain mild violence. According to a study from the Journal of the American Medical Association in which 55 video games were played, "27 games (49%) depicted deaths from violence" (Thompson and Haninger). Parents who do not take the time to learn about a game first risk their children killing in a game that is rated for "Everyone". Children need their parents to talk with them and explain that what they are seeing is not real and that violence like that is not appropriate behavior. A study by the American Psychological Association found that game players self-reported that "game playing was found to elicit more fear than anger, depressed feeling, or pleasant relaxation, and respectively; however it elicited more joy than fear" (Ravaja, Saari and Turpeinen). A desire to commit violence was not among the responses. Parents also need to set clear boundaries on what is appropriate and what is not for their children, based on their own beliefs. The violence portrayed in video games exists without a call to action. The games do not command players to go outside of the game and commit the same acts. It is also not the duty of lawmakers to limit accessibility or ban content altogether because they fear that the violence could incite an incident. The British Medical Journal originally published findings from the United Kingdom Millennium Cohort Study. The study was conducted over 10 years and included more than 11,000 children. It "did not find associations between electronic games use and conduct problems, which could reflect the lower exposure to games and/or greater parental restrictions on age-appropriate content for games" (Parkes, Sweeting and Wright). Parents should determine what is right for their children and what is not. The boundaries of every family are different and need to be enforced by the parents. The creators and retailers of video games often become the scapegoat for lawmakers and government officials when a violent act occurs that involves or is perpetrated by youth. Parents rely on their legislators to take up their causes and seek out laws that will promote their cause. Regulating video games on their behalf is one of those causes.
Legislative bodies across the country are looking for ways to prevent incidents of violence, especially gun violence like what occurred at Sandy Hook Elementary in Newtown, Connecticut, and at the movie theater in Aurora, Colorado. The state of New Jersey outlined a plan last year that included measures to limit and restrict how retailers merchandise games in retail outlets and would require parental consent for kids to purchase games rated "Mature" or "Adults Only" (Friedman). The state of Massachusetts also considered legislation that would assemble a group to "investigate the influence of violent video games and to find if there is a connection with real world violence" (GamePolitics Staff). However, these and other laws being debated across the country face a significant legal roadblock. Video game retailers already take precautions and preventative measures to keep certain games from being purchased by children, and further regulation at the legal level is not needed. The Supreme Court heard Brown v. EMA, a case against California's laws that restricted the sale of certain games to teenagers based on the state's determination that they were violent. The basis of the case came down to a First Amendment issue because California's law specifically singled out video games and no other form of media. The Court struck down California's law and ruled "the games, like books and movies, are protected under the First Amendment's guarantee of freedom of speech. The Supreme Court also said it found no convincing link between the games and real world violence" (Friedman). Justice Antonin Scalia stated, "Psychological studies purporting to show a connection between exposure to violent video games and harmful effects on children do not prove that such exposure causes minors to act aggressively" (Friedman). Regulation by the government is a clear-cut defense for parents who battle with their kids about certain games being purchased and played. It is easier to tell a child that they cannot have something because someone else restricts it rather than because the parent forbids it. It means the child is not upset with the parent and diverts their displeasure. Parents do not have to be the "bad guy" because a law takes care of that for them. I have personally witnessed parents telling kids that they cannot purchase a particular game because it is too graphic or not for their age. Most of the children are less than pleased by the response and show it. I imagine most parents want to avoid that reaction from their child in a store. Creating legislation that the Supreme Court has found to infringe on collective First Amendment rights, or circumventing the video game retailers' current self-regulation, is not the solution. Today's parents should stop looking for outside interference, in the form of more legislation on games, as a substitute for their own decisions as parents. Parents of the next generation are severely taxed by the demands of day-to-day life. The one thing they cannot be relaxed about is the entertainment they choose for their children. Buying a video game console and unleashing a child into the world of gaming is almost a rite of passage for parents, especially parents who grew up playing Super Mario Bros. It is unwise to do so without rules, boundaries, and some due diligence on their part. They should be educating themselves on the games and reviewing game content information available from web sites like IGN.com.
Parents should be supervising their kids when they play games the parents may not be familiar with, yet many do not. They should also be looking at what they can control in their own home, including utilizing parental control settings on the consoles themselves and restricting online and downloadable content. Parents should not lean on lawmakers to establish those confines for them, nor does the responsibility lie in society's hands. The ultimate responsibility lies with parents who are willing to unplug whatever video game content they do not want their child to play.

Works Cited

Friedman, Matt. "Game over? Christie's plan to restrict video games would likely be overturned, experts say." 24 April 2013. NJ.com. Web. 3 March 2014.
GamePolitics Staff. Massachusetts State Senator Proposes Study on Violent Video Games. 14 November 2013. Web. 3 March 2014.
Parkes, Alison, et al. "Do television and electronic games predict children's psychosocial adjustment? Longitudinal research using the UK Millennium Cohort Study." British Medical Journal (2013). Web.
Ravaja, Niklas, et al. "The Psychophysiology of James Bond: Phasic Emotional Responses to Violent Video Game Events." American Psychological Association (2008): Vol. 8, No. 1, 114-120.
Thompson, Kimberly M., ScD, and Kevin Haninger. "Violence in E-Rated Video Games." Journal of the American Medical Association (2001). Web.