Thursday, October 31, 2019

Financial Accounting Concepts Phase 2 DB Essay Example

This is also the reason why the expense incurred on furniture is not shown in full on the income statement; the cost of the furniture ($1,500) will instead appear in reduced amounts over a period of years in subsequent income statements. The income statement reflects the revenues earned and the expenses incurred during a period. The owner's equity statement, by contrast, reflects the position of the owner's capital in the business: it shows the share position, the kinds of shares (common, preferred, or deferred preferred), and their distribution. The balance sheet is a comprehensive statement that reflects the financial position of a firm at the end of a financial year. While the income statement shows mainly operating expenses and net revenues, the balance sheet summarizes a company's entire financial condition at a given point in time, so it includes all of the company's assets, all liabilities, and net worth. It also takes into account the ownership of non-liquid assets and share ownership in the company. The accounting equation underlying the balance sheet is therefore: Assets = Liabilities + Owner's Equity.

The accounting cycle begins with recording receipts and expenses as journal entries, in chronological order by date. These entries are then posted to the ledger, under debits and credits, beneath the appropriate account headings. A trial balance is then prepared to ensure that total debits equal total credits. Any errors or discrepancies are resolved with adjusting journal entries, yielding the adjusted trial balance. This forms the basis for the preparation of the various financial statements, such as cash statements, income statements, and balance sheets. The final steps in preparing these statements are the posting of the closing entries and the preparation of the final
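The journal-to-ledger-to-trial-balance cycle described above can be sketched in a few lines of Python. The account names and amounts here are hypothetical illustrations, not figures from the essay:

```python
# A minimal sketch of the accounting cycle: journal entries are posted to
# ledger accounts, then a trial balance confirms total debits equal total
# credits, and the accounting equation (Assets = Liabilities + Owner's
# Equity) is checked. Amounts and account names are hypothetical.
from collections import defaultdict

ledger = defaultdict(float)  # positive = debit balance, negative = credit balance

def post(entries):
    """Post one journal entry: a list of (account, debit, credit) tuples."""
    assert abs(sum(d - c for _, d, c in entries)) < 1e-9, "entry must balance"
    for account, debit, credit in entries:
        ledger[account] += debit - credit

# Owner invests $5,000 cash; the firm buys $1,500 of furniture on credit.
post([("Cash", 5000, 0), ("Owner's Equity", 0, 5000)])
post([("Furniture", 1500, 0), ("Accounts Payable", 0, 1500)])

# Trial balance: total debits must equal total credits.
debits = sum(v for v in ledger.values() if v > 0)
credits = -sum(v for v in ledger.values() if v < 0)
assert debits == credits == 6500

# Accounting equation check.
assets = ledger["Cash"] + ledger["Furniture"]
liabilities = -ledger["Accounts Payable"]
equity = -ledger["Owner's Equity"]
assert assets == liabilities + equity == 6500
```

Posting the furniture purchase as an asset rather than an expense is exactly why the $1,500 does not hit the income statement at once.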

Monday, October 28, 2019

Network Design Essay Example

The objective at hand was to build a network from the ground up. This was accomplished by breaking the project into sections and building on all previous assignments. This was a good course, as I learned a lot about the different aspects of building a network. On the plus side, I now know how to design a network from the ground up; I learned quite a bit about the technologies associated with networking, and it allowed me to learn quite a few new concepts. One downside of the course is that I did not feel I accomplished much, as there is no hands-on training associated with it. I do not feel that concepts and design ideas alone are a great way to learn how to use any of the systems, but they do give a pretty good idea.

Cabling Specifications

Ethernet is a Local Area Network (LAN) technology with a transmission rate of 10 Mbps and a typical star topology. Computers and devices must wait and listen for transmission time on the network, as only one device can transmit at any one time. To operate with this network strategy, Ethernet incorporates CSMA/CD (Carrier Sense Multiple Access with Collision Detection). Each device on the network listens for the network to be clear before transmitting data. If more than one computer or device transmits data at the same time, collisions occur. Once collisions are detected, all devices stop transmitting for a period of time until one of the devices senses the line is free, gains control of the line, and transmits its data. Receiving devices simply wait and listen for transmissions that are meant for them, which are identified by a MAC (Media Access Control) address. The main advantage of Ethernet is that it is one of the cheapest networks to put into service.
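The stop-and-retry behaviour described above can be sketched with the truncated binary exponential backoff that classic CSMA/CD uses: after each successive collision, a station picks a random wait from a window that doubles in size. The slot counts below are illustrative, not tied to real hardware timing:

```python
# Sketch of CSMA/CD backoff: after the nth collision, wait a random number
# of slot times in [0, 2**n), with the window capped at 1024 slots.
import random

def backoff_slots(collision_count, rng=random.Random(42)):
    """Return how many slot times to wait after the nth collision."""
    window = 2 ** min(collision_count, 10)  # window caps at 1024 slots
    return rng.randrange(window)            # pick 0 .. window-1 slots

# After each successive collision the possible wait grows: 0-1, 0-3, 0-7 ...
for n in range(1, 4):
    wait = backoff_slots(n)
    assert 0 <= wait < 2 ** n
```

Because the window doubles, repeated collisions spread stations out in time, which is how the shared line eventually clears.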
Compared to equivalent Token Ring hardware, Ethernet equipment such as hubs, switches, network interface cards, and cable (Cat5 is common) is inexpensive. The main disadvantage of Ethernet is the collisions that occur on the network. Even though Ethernet cable (Cat5) is fairly inexpensive, it can become a cost issue when designing a large network, because each device or computer requires its own cable connection to the central hub. Another disadvantage is the distance limitation for node connections: the longest run within an Ethernet network without a repeater is 100 meters. Today's Ethernet standards, 100 Mbps and 1000 Mbps, incorporate switched technology, which for the most part eliminates collisions on the network. The IEEE (Institute of Electrical and Electronics Engineers) specification for Ethernet is 802.3, with three-part names designating the different types; for example, 10BASE-T is for 10 Mbps and 100BASE-TX is for 100 Mbps.

Token Ring

Token Ring was developed by IBM as an alternative to Ethernet. The network is physically wired in a star topology but arranged as a logical ring. Instead of the hub or switch found in an Ethernet network, a MAU (Multistation Access Unit) is used. Access to the network is controlled by possession of a token that is passed around the ring from computer to computer, as data can only travel in one direction at a time. A computer that wishes to transmit data takes possession of the token and replaces the token frame with data. The data goes around the ring and returns to the transmitting computer, which removes the data, creates a new token, and then forwards it to the next computer. The IEEE specification for Token Ring is 802.5, and it comes in two speeds: 4 Mbps and 16 Mbps. The main advantage of Token Ring is that there are never any collisions within the network, which makes it a highly reliable solution for high-traffic networks.
The disadvantage of Token Ring is that the network cards and MAU are more expensive than equivalent Ethernet hardware.

FDDI

FDDI (Fiber Distributed Data Interface) is an architecture designed for high-speed backbones operating at 100 Mbps, which are used to connect and extend LANs. It uses a ring topology with two fiber optic cable rings, passing a token on both rings in opposite directions. The specification for FDDI is designated by the American National Standards Institute as ANSI X3T9.5. The advantage of FDDI is that it uses two rings for protection in case one ring breaks; when a break occurs, data is rerouted in the opposite direction using the other ring. It is also considered reliable because it uses a token-passing strategy. The disadvantages of FDDI are the expensive network cards and fiber optic cable; in addition, the amount of fiber optic cable is doubled because of the redundant rings.

Wireless

Local Area Network (LAN) Topologies

A mesh topology has a point-to-point connection to every other device (node) within the topology. Each point-to-point link is dedicated, so it carries traffic only between the two devices it connects. The advantage of a mesh topology is that it works on the concept of routes, which means traffic can take one of several paths between source and destination. The network is also robust: it will not be crippled if one path becomes unavailable or unstable, because each device is connected to every other device. The Internet uses a mesh topology to operate efficiently. The main disadvantage of a mesh topology is that it requires a large number of cables, which is very expensive. A bus topology is a multipoint topology in which each device is connected to a common link or path. The common link can be thought of as the backbone of the network. All devices typically connect to the backbone with a T-connector and coax cable.
The main advantages of a bus topology are that it is easy to install and inexpensive (cost-effective), because it uses very little cable. The main disadvantage is that if there is a problem with the single backbone cable, the entire network loses the ability to communicate. These networks are also very difficult to troubleshoot, because any small problem, such as a cable break, loose connector, or cable short, can cause the outage, and the entire length of cable and each connector must be inspected during troubleshooting. Another disadvantage is the lack of signal amplification, which limits network size based on how far a signal can travel down the cable. A ring topology means that each device is connected in a ring, or daisy-chain fashion, one after another. A dedicated connection exists only between a device and the device on each side of it. Data flows around the ring in one direction, and each device contains a repeater that regenerates the signal before passing it to the next device. The main advantage of a ring topology is that it is easy to install. One disadvantage is difficulty of troubleshooting: because data flows in one direction, it can take time to find the faulty device when there are problems, and the entire network can be taken offline by a faulty device or a cable break within the ring. The star topology has each device in the network connected to a central device, which can be a hub or a switch. All traffic must pass through the hub in order to reach any other device on the network; there is no direct communication between devices as in a mesh topology. One advantage of a star topology is that a failure of one cable or one device connected to the hub will not bring the entire network down, and repairs can be made to individual nodes without disrupting traffic flow. Another advantage is expandability of the network.
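The "large number of cables" disadvantage of the mesh design can be made concrete with a quick count of the links each topology needs for n devices (star and ring included for comparison):

```python
# Link counts per topology for n devices. A full mesh needs a dedicated
# cable between every pair; a star needs one run per device back to the
# hub; a ring needs one link per device to its neighbour.
def mesh_links(n):
    return n * (n - 1) // 2   # every pair of devices gets its own cable

def star_links(n):
    return n                  # one cable from each device to the central hub

def ring_links(n):
    return n                  # each device connects to the next in the ring

# With 10 devices the mesh already needs 45 cables versus 10 for a star.
assert mesh_links(10) == 45
assert star_links(10) == ring_links(10) == 10
```

The quadratic growth of the mesh count is why mesh networks are priced out of most LANs despite their robustness.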
Additional devices can be added to the network without disrupting any current users; all that is required is an additional cable run from the device to the hub. One disadvantage is cable cost, because each device must have its own cable connected back to the hub. The other disadvantage is the hub itself: since all traffic runs through one device, it becomes the single point of failure. If the hub goes down, so does the entire network.

Wide Area Network (WAN) Design

A WAN (Wide Area Network) is an essential part of larger corporate networks, most government networks, and companies with multiple sites. A WAN is, basically, two or more LANs (Local Area Networks) joined together and run as one big network over a large geographical area. Although a WAN can cover very small distances, most WANs cover much larger areas, such as a country or even the world. The largest WAN today is technically the Internet: it is, in short, one giant WAN, because it consists of many smaller LANs and servers, and it covers the globe. The United States government has quite a large WAN, as many of its LANs are in other countries. It needs to get data from one place to another almost instantaneously, and this is one of the quickest and easiest ways to do so. To get on the Internet, a subscriber must go through an ISP (Internet Service Provider), which will give the subscriber access for a certain price every month. There are different ways to get access to the Internet depending on the geographical location in which you live. A subscriber can use dial-up, which is one of the slowest methods, but also one of the most common.
There is also DSL (Digital Subscriber Line) through most phone companies, if they have coverage in the area, and cable, which is usually one of the fastest and most expensive methods. The last common method is satellite access, usually the most expensive way to access the Internet because the equipment usually needs to be bought. When talking about telephone lines, we get into analog versus digital signals and degradation over longer distances. A telephone system works on analog signals: a computer transmits a digital signal to the modem, which converts it into an analog signal (the beeping heard when a computer dials up to access the Internet), and a different computer later converts it back into a digital signal using its own modem. DSL is digital all the way, along with T1 and T3 lines. When using DSL or T1/T3 lines, a filter is used to separate the digital and analog signals, so the phone and the computer receive different signals. Companies usually use faster lines to access the Internet or to reach their other sites. Smaller companies can use DSL or cable Internet services, but larger corporations and governments mostly use public systems such as telephone lines or satellites. Usually, with larger companies going through a public system, we are talking about much faster speeds that can support many more users: T1 and T3 lines are common, satellites are commonly used, and fiber optics is becoming much more common. With many users on a WAN, we need to start talking about network latency. According to Javvin.com, network latency is defined as follows: “latency is a measure of how fast a network is running.
The term refers to the time elapsed between the sending of a message to a router and the return of that message (even if the process only takes milliseconds, slowdowns can be very apparent over multi-user networks). Latency problems can signal network-wide slowdowns, and must be treated seriously, as latency issues cause not only slow service but data losses as well. At the user level, latency issues may come from software malfunctions; at the network level, such slowdowns may be a result of network overextension or bottlenecking, or DoS or DDoS activity.” DoS and DDoS stand for Denial of Service and Distributed Denial of Service, respectively. These attacks are usually carried out by hackers or by someone who does not want others to access a certain service. There was a recent DoS threat against the CNN web page when some hackers wanted CNN to stop covering a certain issue. The attack works by one or many machines taking all of a network's bandwidth, thus preventing others from accessing the site or its services. There are other issues that may slow down a user's PC as well; not all of them revolve around hacker attacks. Many problems can be caused by malicious software such as spyware, malware, viruses, or other problematic programs. These can usually be taken care of by installing anti-virus software or a spyware removal tool. The trade-off is that instead of the malicious software causing slowdowns on the PC, there are slowdowns from the protective software running in the background. Sometimes a simple fix is to defragment the hard drive, which can speed up a PC considerably because the files will be closer together and quicker to access. On a network, a simple way to test latency is to use the trace route program: go to a command prompt and type tracert followed by an IP address if internal, or a website if external.
This sends out packets of information and measures how much time passes before a packet comes back; the time elapsed is the latency. The tool reports times in milliseconds, which may not seem like much, but the packet is tiny: the higher the milliseconds, the higher the latency, and the higher the latency, the longer it takes to do anything on the network. If a high latency time is present, there is bound to be lag somewhere down the line. In a WAN, the equipment used is as follows: in each LAN there are PCs connected to a router (a simple example), and that router connects to a switch. There may be more equipment, but this is a basic example. Each of these LANs then connects to a central hub that interconnects all of the LANs. All of the information travels to the central hub and is then passed on to the correct switch, router, and finally PC. There are usually central servers that store and back up all of the data on the network as well, but this is a crude example of a network. Most companies are also very redundant with their WANs, because they do not want a central point of failure to bring the entire company to its knees. There are usually multiple switches that can tie the entire system together. If a huge corporation's WAN failed, the company could lose a few million dollars in a matter of minutes; this is the main reason redundancy in this situation makes more than enough sense. A lot of companies use VPN software, which lets users log in from outside the company to their computer inside it. This is a very nice system because an employee who needs to work from home has access to everything they were working on onsite.
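The trace-route latency test described above can be sketched as a small interpreter of per-hop round-trip times: each hop reports a time in milliseconds, and consistently high values point at a latency problem. The sample hop names and times below are made up for illustration:

```python
# Interpret trace-route style output: find the slowest hop and flag the
# path if any hop's round-trip time exceeds a threshold.
def worst_hop(hops):
    """hops: list of (hop_name, rtt_ms). Return the slowest hop."""
    return max(hops, key=lambda h: h[1])

def has_latency_problem(hops, threshold_ms=100):
    """Flag the path if any hop's round trip exceeds the threshold."""
    return any(rtt > threshold_ms for _, rtt in hops)

sample = [("gateway", 1.2), ("isp-edge", 9.8), ("backbone", 42.0), ("remote", 180.5)]
assert worst_hop(sample) == ("remote", 180.5)
assert has_latency_problem(sample)   # 180.5 ms is well past the 100 ms threshold
```

Reading the hop list this way makes the point in the text concrete: the hop where the milliseconds jump is where the lag lives.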
This is also helpful from an Information Technology perspective, as it allows a technician working on a remote problem to log in remotely, find out what the issue is, make any configuration changes, and fix most software-related issues without actually having to be onsite. This works well when on call from an offsite location. There are other software packages that work well too: a lot of companies use PCAnywhere for this type of work, and Bomgar is another remote-login solution. A WAN is an imperative part of any corporation, government agency, or company with multiple locations, as it allows them to transfer data quickly, easily, and over great distances at the click of a button. There seems to be more and more need for employees in the networking field today, because more and more corporations need to transfer data more quickly and easily, and new technology such as fiber optics will soon improve on what we have now.

Network Protocols

There are many solutions for remote access, and the most common and one of the most cost-efficient is the VPN (Virtual Private Network). VPN technology is already built into most operating systems and is very easy to implement. In bigger environments and corporations, dedicated VPN hardware should be considered because of the number of simultaneous users and the stress on the servers. There are a few different types of VPN, including IPsec, PPTP, and SSL. Once the remote access connection has been made, you need to make sure the files are readily accessible to the user logging in remotely. One way to do so is to use Samba, an open-source file access system; there are other ways to allow access as well. Using a remote desktop connection, the user can log directly into their PC and use it as if they were sitting at their desk rather than away from the company.

Network Remote Access

Most companies need to be able to access their work from many locations, including home and while traveling. There are two common ways to provide that access. The first is a VPN (Virtual Private Network), which allows the user to log in remotely easily and quickly. The other is a dial-up remote connection, which is a bit easier to set up but can become very costly in the long run. The problem with providing this access is that it can be very expensive and can eat up much of the IT department's time to set up, configure, and implement in the current hardware. The definition from whatis.com of a VPN is: “virtual private network (VPN) is a network that uses a public telecommunication infrastructure, such as the Internet, to provide remote offices or individual users with secure access to their organization's network. A virtual private network can be contrasted with an expensive system of owned or leased lines that can only be used by one organization. The goal of a VPN is to provide the organization with the same capabilities, but at a much lower cost.
VPN works by using the shared public infrastructure while maintaining privacy through security procedures and tunneling protocols such as the Layer Two Tunneling Protocol (L2TP). In effect, the protocols, by encrypting data at the sending end and decrypting it at the receiving end, send the data through a tunnel that cannot be entered by data that is not properly encrypted. An additional level of security involves encrypting not only the data, but also the originating and receiving network addresses.” A VPN (Virtual Private Network) is a helpful tool that allows users of a specific domain to log in to their PC from anywhere in the world with the help of another PC. With this tool, they log in with a special piece of software, using their user name and password, to gain access to all the functionality of the PC they want to reach. This enables a lot of comfortable arrangements: if an employee is sick, they may still be able to work from home. It also allows a flexible company schedule, because if a user needs a document from their home PC, they can essentially log in to their work PC and download the document.

Network Business Applications

A second way to access one's computer from a different location is a dial-up service, with which you can dial in to access all of the resources available on the server. This is a very secure and easy route to go, and it gives the user access to files they may desperately need. Another good thing about using a remote connection to access a server is that a user on a business trip can reach all of their much-needed documents easily and securely without much fuss. The distinction between these two technologies is: “with dial-up remote access, a remote access client uses the telecommunications infrastructure to create a temporary physical circuit or a virtual circuit to a port on a remote access server.
After the physical or virtual circuit is created, the rest of the connection parameters can be negotiated. With virtual private network remote access, a VPN client uses an IP internetwork to create a virtual point-to-point connection with a remote access server acting as the VPN server. After the virtual point-to-point connection is created, the rest of the connection parameters can be negotiated.” There are many advantages and disadvantages to using a dial-up remote connection rather than a VPN. The biggest advantage I have found is that it is easier to set up and maintain, while a VPN makes you set up and maintain individual accounts for both the VPN and the user's name and password on the system. Another advantage of dialing into the system is that no matter where users are, all they need to do is plug into a phone jack and they should be able to log in. The disadvantage is that, depending on where the user is, long-distance charges may apply, and these can run up a pretty penny or two. Another disadvantage is that although the system is cheaper in the short term, it may be more expensive than a VPN in the long run. There are also other ways of using a VPN. One is that certain ISPs (Internet Service Providers) and other third-party support companies will assist in setting up and supporting the VPN without a great deal of time spent by the IT department. This may or may not be more cost-efficient than setting it up yourself, but it does remove a lot of the headaches that VPNs can cause through various errors. There are likewise many advantages and disadvantages to using a VPN over a dial-up system. One of the biggest advantages is that in the long run a VPN is a much cheaper system than dial-up, and it is a little quicker as well.
A VPN is cheaper than a dial-up system because with dial-up, long-distance fees may apply, whereas with a virtual private network the user can call into a local Internet service provider to gain access; any Internet connection will get a user onto the company's network through a VPN. Through all of this, there still need to be security measures in place to keep unwanted users off the system while allowing employees and other authorized users access without downtime. VPNs work well with firewalls; all the IT department needs to do is open the ports used by the VPN, and the user should have full access. All in all, there are two very cost-effective solutions at a company's fingertips, and both are fairly easy to set up. The company needs to decide whether it wants to save money up front and avoid setting up multiple accounts per user, or have the better solution and save more money down the road. The choice also depends on the number of users logging in at any given moment.

Backup and Disaster Recovery

Security, backups, and disaster recovery are all very important parts of all networks in today's world. The problem today is that information on how to hack, destroy, and program any type of malicious software (or malware) is easily accessible via the Internet and other easy-to-access sources. There are roughly 1.4 billion people on the Internet, or who at least have access to it, which is about 25% of the world's population. All of these people have extremely easy access to material on hacking networks and creating malware that can destroy any personal or private data a user may wish to keep. There is no real way to stop these people from harming our software and data from their side, which is why users need to make sure they have security on their own side.
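The long-run cost argument above (dial-up is cheap to start but accrues per-use charges, while a VPN costs more up front) can be sketched as a break-even calculation. All dollar figures here are hypothetical:

```python
# Break-even sketch for two access options with different setup and
# recurring costs. Option A is meant to stand in for a VPN (high setup,
# low monthly cost), option B for dial-up (low setup, long-distance fees).
def total_cost(setup, monthly, months):
    return setup + monthly * months

def breakeven_month(setup_a, monthly_a, setup_b, monthly_b, horizon=120):
    """First month at which option A becomes no more expensive than B."""
    for m in range(1, horizon + 1):
        if total_cost(setup_a, monthly_a, m) <= total_cost(setup_b, monthly_b, m):
            return m
    return None  # A never catches up within the horizon

# Hypothetical: VPN costs $2,000 to set up and $50/month; dial-up costs
# $200 to set up but $300/month in long-distance charges.
month = breakeven_month(setup_a=2000, monthly_a=50, setup_b=200, monthly_b=300)
assert month == 8   # the VPN pays for itself within the first year
```

The same function also shows the other side of the essay's argument: if the recurring charges are similar, the cheap-setup option never stops being cheaper.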
Other things can happen besides deliberate attacks on a user's files and data: accidents can destroy data as well. Many things can harm a user's data, such as a fire, earthquake, power surge, or, in the worst case, some sort of electromagnetic pulse (EMP). This is where data backups and disaster recovery come in. Many companies specialize in helping a user or company back up their data and store it off site, such as SunGard (mostly used in bigger company settings). There are other ways to store a user's data as well. One way is to make a physical copy of everything needed on CDs, DVDs, a flash drive, or some other media and store it at the house of a friend or some other trusted person. This keeps a hard copy of all of their data off site, just in case something happens, so it can be restored. A few other companies offer online backups: the user downloads their software and it automatically backs everything up to a few different locations for redundancy, which gives the customer more safety and easier access to all of their files. One of the first steps for a business that wishes to be secure in all that it does is to set up a backup and disaster recovery plan. As stated earlier, there are many ways to do it. A larger company will probably want someone internal to make a physical backup of all the data and send it to an off-site company for storage. They should also keep another copy close at hand at all times, preferably away from where the live data sits, for example on the opposite side of the building from the file server. If anything happens to the servers, they can then quickly and easily use the backed-up copy to restore all of the data onto the servers.
Most companies keep two or three backup units on site for redundancy, so that if one goes down there are still a couple of others from which all of the data can be restored. Although this is somewhat more expensive than a single backup system, it can be well worth it.

Network Security

According to devx.com, “the first step in drafting a disaster recovery plan is conducting a thorough risk analysis of your computer systems. List all the possible risks that threaten system uptime and evaluate how imminent they are in your particular IT shop. Anything that can cause a system outage is a threat, from relatively common man-made threats like virus attacks and accidental data deletions to more rare natural threats like floods and fires. Determine which of your threats are the most likely to occur and prioritize them using a simple system: rank each threat in two important categories, probability and impact. In each category, rate the risks as low, medium, or high. For example, a small Internet company (less than 50 employees) located in California could rate an earthquake threat as medium probability and high impact, while the threat of utility failure due to a power outage could rate high probability and high impact. So in this company's risk analysis, a power outage would be a higher risk than an earthquake and would therefore be a higher priority in the disaster recovery plan.” Another big part of developing any security system is that the company (or department) needs to look at its budget and how much it is willing to spend. A company can get a basic security system for its network (including a firewall) fairly cheaply, and this may do most of what is needed, but larger companies are going to need to spend quite a bit more than a small company.
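The probability-and-impact ranking quoted above can be sketched directly: each threat gets a low/medium/high rating in both categories, and the combined score sets its priority in the disaster recovery plan. The threats below are the examples from the quoted passage:

```python
# Rank threats by probability x impact, using the low/medium/high scheme
# from the quoted risk-analysis advice.
LEVEL = {"low": 1, "medium": 2, "high": 3}

def risk_score(probability, impact):
    return LEVEL[probability] * LEVEL[impact]

threats = [
    ("earthquake",      "medium", "high"),
    ("utility failure", "high",   "high"),
    ("virus attack",    "high",   "medium"),
]
ranked = sorted(threats, key=lambda t: risk_score(t[1], t[2]), reverse=True)

# Utility failure (high/high) outranks the earthquake (medium/high),
# matching the example in the quoted risk analysis.
assert ranked[0][0] == "utility failure"
```

Multiplying the two levels is one simple scoring choice; any monotonic combination (sum, max, a lookup matrix) would preserve the same low/medium/high intent.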
Most larger companies spend quite a bit because they usually have higher-priced clients they cannot afford to lose, and all of their data is invaluable to the company. Some companies have their own information system security employees to monitor the network in case of any type of attack; they also make sure all of the anti-virus and anti-malware software is running and updating properly. Lastly, something most companies forget after their equipment and software are installed is that there is more to security than the initial implementation of hardware and software: they need to make sure everything continues to run and update itself against newer and bigger threats. Companies need to continually test and check what must be done to maintain a network that cannot be broken into. There are people who can be hired to try to break into a company's network; they get paid and let the company know what needs to be fixed so others cannot break in as well. In conclusion, a company can be reduced to nothing, or brought to its knees, without its network and servers. There are many things that can cripple a company without the help of man, and the only way to avoid them is to have a proper disaster recovery plan and to make sure the network is not vulnerable in any way.

References

About, Inc. (2004). Network topologies: bus, ring, star, and all the rest. Retrieved October 12, 2004, from http://compnetworking.about.com/library/weekly/aa041601a.htm
Brain, M. (2004). How stuff works: how wifi works. Retrieved October 12, 2004, from http://computer.howstuffworks.com/wireless-network.htm/printable
Network Latency. (n.d.). Retrieved April 27, 2008, from http://www.javvin.com/etraffic/network-latency.html
Broadband Internet. (n.d.). Retrieved April 27, 2008, from http://www.pcworld.idg.com.au/index.php/id;988596323
Wide Area Networks. (n.d.). Retrieved April 27, 2008, from http://www.erg.abdn.ac.uk/users/gorry/course/intro-pages/wan.html
Virtual Private Network. (n.d.). Retrieved May 11, 2008, from http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci213324,00.html
VPN vs. Dial up. (n.d.). Retrieved May 11, 2008, from http://technet2.microsoft.com/windowsserver/en/library/d85d2477-796d-41bd-83fb-17d78fb1cd951033.mspx?mfr=true
How to Create a Disaster Recovery Plan. (n.d.). Retrieved May 23, 2008, from http://www.devx.com/security/Article/16390/1954
World Internet Usage Statistics. (n.d.). Retrieved May 23, 2008, from http://www.internetworldstats.com/stats.htm

Saturday, October 26, 2019

Should Torture be Justified in any Case?

Should Torture be Justified in any Case? Jason Poole The word torture comes from a Latin root meaning "twisted," and first appeared in Rome in 530 AD. Six hundred years later, Italian and French courts changed from an accusatory system to a judiciary system, as opposed to the Roman courts, where torture was used to extract information (Green). However, the idea of torture in the courtroom was not put to rest until the 18th century, during the Enlightenment. Voltaire condemned torture profusely in many of his essays, and from the end of the 18th century into the start of the 19th century, nearly every European country abolished torture in its statutory law (Green). After the adoption of the Geneva Conventions, torture became condemned completely. Recently, the debate over torture has been reopened by the controversy over waterboarding, brought forth by the American Central Intelligence Agency (CIA) in 2004. The controversy was provoked because the definition of torture has allowed interrogators and lawmakers to interpret it in different ways. The set definition is "the infliction of intense physical pain to punish, coerce, or afford sadistic pleasure" (Torture). As the definition mentions only physical pain, one could assert that psychological pain, as some argue waterboarding is, does not fall under the restrictions on torture. The debate over whether torture can be defended in any situation depends on whether the life of an innocent takes precedence over the physical and psychological state of a criminal. The argument that torture can be justified revolves around utilitarianism, the idea that an action serves the greater good. Only within recent centuries have attitudes changed against the use of torture. According to a poll done by the Washington Post, 82% of conservatives in the United States believe that torture can be justified in most cases involving national security.
However, with the addition of Article 3 in the Geneva Conventions of 1949, the social stigma against torture was solidified. The UN's standards show that torture can never be justified, and that an interrogator who commits the act should be fully prepared to face the consequences of doing so in court. Non-governmental organizations such as Amnesty International and the World Organization Against Torture are strong advocates of this viewpoint; both press for political action against torture. In the United Kingdom, almost 70% are clearly against torture in all cases (Amnesty). Opinions of respected political analysts, as well as studies of each side, will allow the two arguments regarding torture to be evaluated and assessed suitably. The perception of the temporary pain of a criminal as against the permanent death of an innocent is one found in many arguments from this perspective. It is the thought that the criminal, who has done or will do much worse, has a way out of the torture being inflicted upon them in the form of giving up the information that the interrogator needs (Spero). Spero claims that, "Certainly, pain is not the equivalent of life itself, so that even saving one life takes precedence over the pain of the terrorist." He supports this statement by arguing that a moral person could not stand by under these circumstances, and that most would put the state of their countrymen above that of the terrorist who threatens their lives. Spero asserts that the happenings at Guantanamo Bay are not torture but coercion. He doesn't defend the uses of interrogation themselves, but rather compares the enhanced interrogation techniques that the United States uses on terrorists to the permanent defacement used in the Muslim world, as well as the point that the purpose behind the former is information and the latter's is sadism (Spero). However, Spero has a paragraph that shows his bias in this controversy, calling American liberals anti-Western and anti-American.
He also calls those at the New York Times "mentally abnormal." This bias, as well as the fact that he holds no qualifications to defend the use of torture, serves to detract from his argument that torture can be justified. In his editorial, Charles Krauthammer cites the possibility of jury nullification in cases where torture occurred, which is usually applied when extenuating circumstances the defendant was under cause the jury to return a verdict that contradicts the facts of the case. The idea that there are specific cases in which jury nullification should be called for is supported by Krauthammer, a known defender of the concept of the ticking time bomb. He asserts that there are two cases in which torture can be justified: the aforementioned ticking-time-bomb scenario, and a situation in which there is a near guarantee that many innocents will be killed. The ticking time bomb is a hypothetical thought experiment involving the ethics of torture. The experiment first appeared in the 1960s, and poses the question of whether someone with knowledge of an imminent terrorist attack should be tortured into giving up that information (Lartéguy). Krauthammer falls on the consequentialist side of the argument, believing that the torture of the person can be justified, especially if innocent lives are at stake. In his opinion editorial in 2009, he states his viewpoint on torture and attempts to defend it. However, he fails to discern the difference between interrogation and torture, severely discrediting his argument; he ends up defending interrogation instead of torture, causing him to fail in proving his point. Krauthammer also calls his second exception to his no-torture rule an example of Catch-22: as the defenders do not know the information they need to be able to stop an act of terrorism from happening, and cannot find it out in time, an interrogator should resort to extremities to deal with a terrorist who acts in extremes (Krauthammer).
Krauthammer's standing as the previous Chief Resident in Psychiatry at Massachusetts General Hospital, along with his Master's degree in psychology, helps his credibility on the subject of torture, and thus his argument as a whole. At this time, no one is arguing for the removal of laws against torture. John McCain, a prisoner of war during the Vietnam War and a current Senator from Arizona, believes, "I don't believe this scenario requires us to write into law an exception to our treaty and moral obligations that would permit cruel, inhumane and degrading treatment. To carve out legal exemptions to this basic principle of human rights risks opening the door to abuse as a matter of course, rather than a standard violated truly in extremis." This is another example of a case where jury nullification would be a viable solution. Rather, there are those who believe that torture is inescapable, though still morally unjust. One such person is Bruce Anderson, a British political columnist and an advocate of torture. He wrote an editorial for The Independent in 2010, arguing that Britain has a duty to torture terrorists. Anderson says that men cannot be angels in the case of torture, and explains that, "However repugnant we may find torture, there are worse horrors, such as the nuclear devastation of central London, killing hundreds of thousands of people and inflicting irreparable damage on mankind's cultural heritage." He defends this statement by painting torture as the lesser of two evils, and claims that Britain is ensuring its own destruction by not gathering the information needed to prevent a terrorist attack. He also asserts that the best way to garner this information is through torture (Anderson). Anderson then flounders for an answer when asked by the British liberal Sydney Kentridge what he would do when a hardened terrorist would not divulge the information needed.
His answer was, "Torture the wife and children." This answer on how he would break a terrorist is hypocritical of his previous statement. This, along with the fact that he has no specific qualifications on this subject, severely discredits his argument. The perception that torture does not work as a means of extracting accurate information is an old principle dating back to the 18th century. It is the idea that if one were tortured for information, at some point one would say anything for the pain to stop. Rupert Stone asserts that torture is at best ineffective for gathering information. To support this, he cites Shane O'Mara, the author of Why Torture Doesn't Work, saying torture can produce false information by harming those areas of the brain associated with memory. An experiment conducted by Charles Morgan in 2006 had soldiers undergo stressful, but typical, means of coercion. At the end of the trial, they exhibited a remarkable deterioration in memory (Stone). One of his interviewees, Glenn Carle, an interrogator with the CIA, comments on the subject: "Information obtained under duress is suspect and polluted from the start and harder to verify." He speaks about his experience in interrogating terrorists, and how those who were under stress before he tried to interrogate them were more likely to give false information. However, he admitted that he was not sure whether this was because of memory impairment or a desire to stop the stressful conditions, which has the potential to weaken his argument. Regardless, he asserts that torture can lead to false confessions (Stone). A letter to Frontline PBS from Michael Nowacki, a Staff Sergeant in the U.S. Army, also agrees with the idea of false information. He argues that using false information gathered from previous torturees can cause innocent people to be tortured for information they do not know about.
As an interrogator, he found that 95% of the people being put under these conditions were innocent, and that most of these cases came from false statements by informants put under torture (Nowacki). The thought that torture can create propaganda for terrorist groups has recently been spurred by the American Air Force Major writing under the pseudonym Matthew Alexander. He was one of the lead interrogators tasked with finding the location of Abu Musab al-Zarqawi, who was the head of Al-Qaeda in Iraq at the time. In 2008, he wrote How to Break a Terrorist, which detailed his accounts of how he managed to garner the information needed. He commented on his belief that highly coercive interrogation techniques have not helped the United States in the past, and how interrogating the informant with confidence-building approaches led him to the location of Zarqawi (Alexander). Alexander claims that by stooping to torture, America would be pushing more people to Al-Qaeda, thus being counterproductive. He supports this by explaining that the people he fought against state that the number one reason they decided to pick up arms and join Al-Qaeda was the abuses at Abu Ghraib and the authorized torture and abuse at Guantánamo Bay. He asserts that the short-term gains of torture would be overshadowed by the long-term losses (Alexander). In his interview he quotes Alberto Mora, a General Counsel of the U.S. Navy. Mora comments that the main causes of U.S. combat deaths in Iraq, through the recruiting of insurgent fighters into combat, are Abu Ghraib and Guantánamo. This idea is also supported by John Hutson, a retired Rear Admiral in the U.S. Navy, who asserts in a debate about torture that there was a reason the Nazis surrendered to the Americans, the ones they knew would treat them somewhat fairly, rather than to the Russians, who unashamedly tortured their prisoners for information in World War 2.
He also tries to support the argument by citing the first Iraq War: "In the first Iraq war, tens of thousands of Iraqis surrendered to us because they knew that they would be treated decently. My friends, they're not surrendering to us anymore" (Hutson). There is a large amount of bias here, not only because he is stating his opinion but also because he is trying to convince the audience of the debate that torture is not necessary to gain information. After assessing the arguments for both positions on the controversy of torture, I can only morally agree with the idea that torture is unable to be justified. It is a practice that is hard to condone, as most enhanced interrogation techniques are close to, or could be considered, torture. Henry Porter, attempting to combat the aforementioned Anderson, summarizes the idea: "It is preposterous for him to suggest that Elizabethan society has anything to tell societies that come after the enlightenment and the birth of the age of universal rights. It's as stupid as citing the Vikings or Visigoths to excuse behaviour in the 21st century." There are many constraints on interrogation, as well as on governments in general, to prevent the use of torture: the Eighth Amendment of the U.S. Constitution, the Geneva Conventions, and the Universal Declaration of Human Rights, for example. However, I would like to think myself not naive enough to believe that torture will never happen, no matter the rarity of the cases, as the research of my paper concludes. I maintain that torture is a horrible practice, though I find myself agreeing with Senator John McCain that torture should not be a permanent exception to the law, but one violated only in extraordinary circumstances, and with Krauthammer that a torturer should be fully prepared to face the consequences, no matter the circumstances.
However, this topic must be researched much more before sanctioning the cases in which torture might be justified. Overall, the justification of torture is an idea that cannot be applied to all cases. Each detail needs to be thoroughly investigated, and even then, every case has different circumstances that could allow torture to be, or prevent torture from being, justified. Thus, it is impossible to say categorically that torture can or cannot be justified.

This is not to say that interrogators who have used torture for information should be forgiven automatically. There is a general consensus between both perspectives that the inflictor must go to court and be prepared to be punished for his actions, as torture is still against the law. However, the distinction is found in the idea of jury nullification. It occurs when a jury returns a verdict of Not Guilty despite concrete proof, or the accepted belief, that the defendant has committed the crime they are on trial for. When applied to torture, jury nullification occurs when the extenuating circumstances that the interrogator was placed under allow the act to be justified, and therein lies the controversy.

Works Cited

Alexander, Matthew. "The American Public has a Right to Know That They Do Not Have to Choose Between Torture and Terror: Six Questions for Matthew Alexander, Author of How to Break a Terrorist." Harper's Magazine, 18 December 2008, http://harpers.org/blog/2008/12/the-american-public-has-a-right-to-know-that-they-do-not-have-to-choose-between-torture-and-terror-six-questions-for-matthew-alexander-author-of-_how-to-break-a-terrorist_/

"Amnesty poll finds 29% say torture can be justified." BBC News, 13 May 2014, http://www.bbc.com/news/uk-27387040

Anderson, Bruce. "Bruce Anderson: We not only have a right to use torture. We have a duty." The Independent, 15 February 2010, http://www.independent.co.uk/voices/commentators/bruce-anderson/bruce-anderson-we-not-only-have-a-right-to-use-torture-we-have-a-duty-1899555.html

Goldman, Adam. "New poll finds majority of Americans think torture was justified after 9/11 attacks." Washington Post, 16 December 2014, https://www.washingtonpost.com/world/national-security/new-poll-finds-majority-of-americans-believe-torture-justified-after-911-attacks/2014/12/16/f6ee1208-847c-11e4-9534-f79a23c40e6c_story.html?utm_term=.12533031f512

Green, Camilla. "History of Torture." The Justice Campaign, http://thejusticecampaign.org/?page_id=175

Krauthammer, Charles. "The Use of Torture and What Nancy Pelosi Knew." Washington Post, 1 May 2009, http://www.washingtonpost.com/wp-dyn/content/article/2009/04/30/AR2009043003108.html

Lartéguy, Jean. Les Centurions. Penguin Classics, December 1960.

Nowacki, Michael. "Join the Discussion: The Torture Question." Frontline PBS, http://www.pbs.org/wgbh/pages/frontline/torture/talk/

Roth, Kenneth. "Torture: Does it make us safer? Is it ever OK?" Human Rights Watch, 2005, http://rockyanderson.org/rockycourses/Torture_History_of_Torture019.pdf

Spero, Aryeh. "It's Not Torture and It Is Necessary." Human Events, 16 January 2007, http://humanevents.com/2007/01/16/its-not-torture-and-it-is-necessary/

Stone, Rupert. "Science Shows that Torture Doesn't Work and is Counterproductive." Newsweek, 8 May 2016, http://www.newsweek.com/2016/05/20/science-shows-torture-doesnt-work-456854.html

"Torture: The Definition of Torture." Merriam-Webster, https://www.merriam-webster.com/dictionary/torture

Thursday, October 24, 2019

Blood Brain Barrier Essay -- Biology

The brain is permeated by a vast network of tiny blood vessels called capillaries, so tiny and thin that blood cells have to pass through in single file. In the brain alone there are enough capillaries that if you laid them all out end to end they would stretch from Tucson to Tijuana. These capillaries are surrounded by a single layer of cells, and that layer forms a barrier between the capillaries and the cells and fluid of the brain. These barrier-forming cells are called "endothelial cells"; you can think of "endothelial" as a synonym for "lining" or even just "barrier". When we use the phrase Blood Brain Barrier (which for obvious reasons we'll refer to as BBB from here on out!), we're talking about all of these endothelial ("barrier") cells collectively. Function of the BBB: The cell membranes of the BBB contain transport proteins. If the brain is a nightclub, the transport proteins are bouncers: they decide who gets in, and who gets kicked out. On this website we'll be introducing you to the most important transport proteins: OATP, MDR1, and MDR2. Don't let all the acronyms intimidate you; read carefully and you'll be fine. If the nightclub/bouncer analogy doesn't work for you, you could also think of them as little vacuum pumps and blowers. An extremely detailed view of their actual mechanisms is beyond current knowledge. Importance of the BBB: Without the BBB, undesirable molecules could freely diffuse from the capillaries to the fluid that surrounds the brain cells. These undesirable molecules include: TOXINS, poisons taken in from the environment; IONS, which might upset the delicate electrochemical gradients of the cerebral fluid; ACIDS and BASES, which might upset the cerebral ... ... breaks down the BBB, so the mice infected with GBS lacking this toxin developed less bacterial meningitis than those infected with the normal GBS.
Doran says: "These findings demonstrate a novel function of the blood-brain barrier, to act as a sentry that detects the threat of a bacterial pathogen and responds by triggering an immune response to clear the infection." 3. Neuwalt researches the treatment of brain tumors with chemotherapy, which is difficult because of the BBB (a natural defense against chemical transport into the brain). Chemicals can be introduced to the brain by shrinking the endothelial cells that make up the BBB with a concentrated sugar solution, which creates gaps in the BBB that allow chemicals to enter (called Blood-Brain Barrier Disruption Therapy). The project has been tenfold to a hundredfold more successful than normal chemotherapy and intra-arterial chemotherapy (Neuwalt, 1998).

Wednesday, October 23, 2019

Determinants of Student’s Academic Performance Essay

It is fair to say that man in modern society is highly advanced in education, both in science and in technology, yet he may not think about what steps he could take, or what good he could do, for his fellow men. He does not live by the attitudes acceptable in the society where he lives. A man today is more conscious of his own personal upliftment, yet remains innocent of knowing his worth, which would be undeniably great if he were treading the right way. Everyone has a right to education. This is embodied in Article XIV, Section 1 of the 1987 Philippine Constitution: "The State shall protect and promote the right of all citizens to quality education at all levels and shall take appropriate steps to make such education accessible to all." Schools, colleges, and universities have no work without students. Students are the most essential assets of any educational institution. The social and economic development of the country is directly linked with student academic performance, for students' performance plays an important role in producing the best-quality graduates who will become great leaders and the manpower for the country, and who are thus responsible for the country's economic and social development. Parents or guardians must therefore fulfill their responsibilities and roles to give their children what they need in education. Intelligence is not the only determinant of the academic performance of a student. Academic performance is always associated with the many components of the learning environment. A learning and teaching environment ought to implement six functions: inform, communicate, collaborate, produce, scaffold, and manage. The key to success in the learning-teaching environment lies with the people who use it. Hence, in the instructional system, the teacher is the main factor who can spell the difference between the success and failure of a student. Another important determinant, which should not be neglected, is the family.
Family is the primary social system for students in all cultures across the region. Religiosity, as an aspect of the family environment, is another independent variable possibly influencing academic performance. Higher-achieving students are likely to have the following characteristics: positive feelings about their school experiences, and attribution of their success in high school to such things as hard work, self-discipline, organization, ability, and high motivation. These characteristics vary from person to person and country to country.

STATEMENT OF THE PROBLEM

This study determined the factors related to the academic performance of second-year Bachelor of Science in Respiratory Therapy (BSRT) students at Cagayan State University. To attain the aforementioned objective, answers to the following research questions were sought.

1. What is the profile of the BSRT students in terms of:

A. Personal Factors:
a.1. Sex
a.2. Parents' occupation
a.3. Number of siblings
a.4. Physical health
a.5. Student attitude
a.6. Religion or ethnicity

B. School Factors:
b.1. No vision
b.2. Lack of passion
b.3. Lack of personal/work/school/family balance
b.4. Lack of taking advantage of student resources
b.5. Attending the wrong college or university
b.6. Lack of maturity and discipline

C. Community Factors:
c.1. School distance from home
c.2. Means of transportation

D. Intrinsic Factors:
d.1. Interest
d.2. Ability

E. Extrinsic Factors:
e.1. Family
e.2. Peers

F. Aspirations

G. Needs

2. What are the determinants of the academic performance of the BSRT students?

3. Is there a relationship between the profile and the academic performance of the BSRT students?

4. How do the teachers perceive the academic performance of the BSRT students?

SCOPE AND DELIMITATION OF THE STUDY

This research study is centered on the factors related to the academic performance and attitudes of the BSRT students at Cagayan State University, Andrews Campus.
The profile of the Bachelor of Science in Respiratory Therapy students in terms of personal, school, and community factors was determined. Likewise, the teachers' perceptions of the academic performance of the BSRT students were considered. Furthermore, the relationship between the students' profile and academic performance was also determined. Lastly, the variables that contribute to the variation in the BSRT students' academic performance were established.

SIGNIFICANCE OF THE STUDY

It is with optimism that the findings of this study would contribute to the development of the macro educational system, particularly at Cagayan State University, in terms of the determinants related to the academic performance of BSRT students. Furthermore, it is hoped that feedback from the data gathered would be used as clues for recommending changes for improvement, fulfilling practices and performance that are relevant and responsive to the demands of our educational system. Moreover, the results of this study would guide teachers in improving their classroom management and their instructional methods and strategies, to equip their students with the needed preparation for their future careers. Likewise, parents would be made knowledgeable about the determinants of the academic performance of their child, so that they can suit a proper program of activities to their child for better performance. It is also hoped that this study shall help the school maintain a harmonious relationship with the community in playing its vital role for progress and development through people empowerment. Summing up, the findings of this study would contribute to the attainment of educational excellence and the national development goal: the conversion of the Philippines into a newly industrialized country.

DEFINITION OF TERMS

1.
Ability-

CHAPTER II: REVIEW OF RELATED LITERATURE

Student academic performance is affected by social, psychological, economic, environmental, and personal factors. The learning environment refers to the whole range of components and activities within which learning happens (Bahr, Hawks, & Wang, 1993).

A. PERSONAL FACTORS

The socio-economic status of students is directly proportional to their scholastic performance.

1. Parents' Occupation
According to Ruben, as cited by Ramiro (1996), the effect of low income reflects lack of education or training, physical or mental disability, or poor motivation. Students whose parents were both college educated tended to achieve at the highest levels. Income and family size were modestly related to achievement (Ferguson, 1991). Middle-class parents tend to be college graduates, although some only graduated from high school, and many only reached elementary school. Bremberk (1996) found that an increase in the percentage of parents with college degrees and white-collar jobs has a positive effect on school performance. Parents' educational attainment is related to the school achievement of the youth.

2. Number of Siblings
Children from large families may be handicapped because they receive a relatively smaller share of the family's intellectual resources than children from smaller families (Draig, 1998).

3. Attitudes of the Student
Performance and attitude characteristics are strong determinants of academic achievement, as cited by Marcos (1998). According to Santrock (1998), when our attitude is based on personal experience, our behavior is more likely to reflect our attitudes. When we have thought about our attitude toward something and have ready access to it, the attitude-behavior connection is strengthened. In the words of John Locke, "The actions of men are the best interpreters of their thoughts." Ramiro (1996) mentioned that the habits of students are very much related to education.
This relationship contributes something substantial to the academic performance of students; their respective schools and homes greatly affect their standing in school.

B. SCHOOL FACTORS

Education is a continuous process which every parent aims to give as a gift to their children for their future. Abracia (1984) stated that the school is considered a second home for learners because it is a place where they come to know everything, and where the teacher serves as a parent.

1. No vision
According to Wollitkiewics (1980), some students do not have a clearly articulated picture of the future they intend to create for themselves. Thus, they may take programs of study without a clear career goal or objective. In essence, they choose the wrong major.

2. Lack of passion
In a study conducted by Salinas (1989), she emphasized that successful students work out of passion, a love for what they want to do, and recognize the importance of the benefit it will bring to others as well as themselves. Without passion, study becomes a chore and not a method for achieving clearly defined goals.

3. Lack of personal/work/school/family balance
Whatever is going on in a student's personal life will inevitably affect what is going on in school, and whatever is happening in school will affect their personal life. A student needs time to be in class and appropriate time for study. However, there must also be time for family, friends, and social activities, and time to just be alone. The key is keeping a proper balance (Kalko, Elisabeth K.V., et al., 2006).

4. Lack of taking advantage of student resources
There is really no excuse for academic failure. According to San Luis (2003), every college and university has an academic learning center where students can receive peer and faculty tutoring without charge. Many students fail to seek help.

5. Attending the wrong college or university
Tylan (1998) found that students accept admission into schools they are not familiar with.
Thus, they become depressed with their surroundings, which in turn has a negative effect on their studies. Students must be content with their school, its environment, and its resources.

6. Lack of maturity and discipline
Some students are simply not disciplined and lack good organizational skills. They often fall under the pressure of their peers (Corpus, 1999). Rather than using good discretion, they feel compelled to follow others socially when they really should be attending to their studies.

C. INTRINSIC AND EXTRINSIC FACTORS

Intrinsic motivation refers to motivation that is driven by an interest or enjoyment in the task itself. It occurs when people are internally motivated to do something because it brings them pleasure, they think it is important, or they feel that what they are learning is significant. Students are likely to be intrinsically motivated if they attribute their educational results to factors under their own control, also known as autonomy (http://en.wikipedia.org/wiki/Motivation#intrinsic_and_extrinsic_motivation). Intrinsic means internal, or inside yourself. When you are intrinsically motivated, you enjoy an activity, course, or skill development solely for the satisfaction of learning and having fun, and you are determined inwardly to be competent (http://www.livestrong.com/article/174305-the-difference-between-intrinsic-motivation-extrinsic-motivation). According to Dr. James Gavin, intrinsic motivation is derived from one's self-concept, core beliefs, internal needs, and development, as opposed to extrinsic motivators, which can undermine these motivations. Motives need to be additive in effect, which means the more reasons you find to motivate yourself to engage in a behavior, the more likely you will continue with and persist in that behavior. External motivators are typically not additive.
Extrinsic motivation, on the other hand, means external, or outside of oneself; this type of motivation is everywhere and is frequently at work within society throughout your lifetime. When you are motivated to behave, learn, or act based on a highly regarded outcome, rather than for the fun, development, or learning provided within an experience, you are being extrinsically motivated (http://www.livestrong.com/article/174305-the-difference-between-intrinsic-motivation-extrinsic-motivation).

1. Family
Probably the strongest influence in our lives is the family we grew up in: our birth order, the personality of our parents, the way we were treated by our siblings, the socioeconomic status of the family, and the place we lived. Beyond these tacit influences, our parents taught us all the basics of proper behavior ("Family Influence", 3rd ed., New York: Ronald M. Doctor and Ada P. Kahn, 2008). Rollins and Thomas found that high parental control was associated with high achievement, and parents have a crucial role in ensuring that every child becomes a high achiever. Parental influence has been identified as an important factor affecting student achievement; Philips (1998) likewise found that parental education and socioeconomic status have an impact on student performance.

2. Influence of Peers
Peer groups play a powerful role in shaping identity because the desire to be accepted by and "fit in" with one's peers often becomes a paramount concern for most adolescents. Peer groups are likely to impose negative sanctions upon those who violate what are perceived as established norms of behavior and who attempt to construct identities that deviate significantly from prevailing conceptions of racial and gender identity (http://www.inmotionmagazine.com/er/pntroub1.html). Peers are people of the same age or educational level, or who hold the same job or profession.
According to Christine Adamec (2008), a peer group can cause anxieties for an individual because it can lower self-concept and self-esteem and arouse other negative attitudes and behaviors.

CHAPTER III
RESEARCH METHODOLOGY

This chapter presents the research design, the locale of the study, the respondents and sampling procedure, and the instrument and statistical tools used to treat the data collected.

RESEARCH DESIGN
Since this study will determine the academic performance of second-year BSRT students at Cagayan State University, the researchers will use the descriptive correlational method. The descriptive method often involves extensive observation and note-taking; it describes data and characteristics of the population or phenomenon being studied. The correlational method will also be used to examine the relationships between and among the input, transformation-process, and output variables. The researchers chose this method because the condition and description of the subjects and variables will be determined as they are at the time of the study.

LOCALE OF THE STUDY
The College of Allied Health and Sciences is located at Cagayan State University, Tuguegarao. It is composed of two courses: Bachelor of Science in Medical Technology and Bachelor of Science in Respiratory Therapy. The researchers will focus only on the second-year BSRT students.

RESEARCH INSTRUMENT
The principal instrument used in collecting the needed data is a questionnaire. It is composed of structured questions regarding personal factors, school factors, community factors, intrinsic factors, extrinsic factors, and students' academic performance; each item will be provided with possible answers from which the respondents can choose. The academic performance of the students will be derived through documentary analysis.
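The study does not name the specific statistical tool for its correlational analysis. Assuming the conventional choice for a descriptive correlational design, Pearson's product-moment correlation coefficient, the strength of association between paired observations $(x_i, y_i)$ (e.g., a factor score and a grade) would be computed as:

```latex
% Pearson's product-moment correlation coefficient
% (an assumed, standard choice; the thesis does not name its statistic)
r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,
          \sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}
```

Here $\bar{x}$ and $\bar{y}$ are the sample means; $r$ ranges from $-1$ (perfect negative correlation) through $0$ (no linear relationship) to $+1$ (perfect positive correlation).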

Tuesday, October 22, 2019

S Dickens, innit - Emphasis

He began by turning Shakespeare into txt spk. Now it's Dickens for da yoof of today. Martin Baum, a father from Bournemouth, has rewritten Dickens in yoof-speak in order, he claims, to get children interested in reading. "Kids today have invented their own language," says Baum. "And I use this language to try and engage them." Judge his alleged mission as you will, while you contemplate his opening to Da Tale of Two Turfs: "It was da best of times and, not being funny or nuffing, but it was da worst of times, to be honest."