
Essay: Computer science questions answered


CS450 EOS
1. a) Personally, I would recommend the adoption of crypto, that is, cryptographic support built directly into future chips as security concerns grow. Perhaps an eFPGA (Embedded Field Programmable Gate Array) core will become the standard, onto which any accelerator logic can be mapped. For FPGAs, there are existing proposed architectures in which CPU cores share some memory and a portion of an FPGA, making it possible to dynamically program functions closer to the specific cores executing the applications that profit from them.
b) There is no single primitive that can be applied to all applications involving hundreds of cores. SIMD (Single Instruction, Multiple Data), MIMD (Multiple Instruction, Multiple Data), and MISD (Multiple Instruction, Single Data) approaches are each required in certain situations. Machines with hundreds of cores are already available. They all use a different synchronization technique depending on the problem they are trying to solve, and they frequently combine several synchronization techniques for different parts of the same problem.
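To illustrate one such synchronization technique, here is a minimal sketch of my own (not from the course material) using a barrier from Python's multiprocessing module: a group of worker processes each do independent work, then wait for one another before moving to the next phase. The worker logic and phase structure are invented for the example.

```python
# Minimal sketch: barrier synchronization across worker processes.
# The two-phase worker logic is an illustrative assumption.
from multiprocessing import Barrier, Process

NUM_WORKERS = 4

def worker(rank: int, barrier: Barrier) -> None:
    # Phase 1: each worker does independent (MIMD-style) work.
    partial = rank * rank
    print(f"worker {rank} finished phase 1 with {partial}")
    # Every worker must reach this point before any continues.
    barrier.wait()
    # Phase 2: runs only after all workers completed phase 1.
    print(f"worker {rank} entering phase 2")

if __name__ == "__main__":
    barrier = Barrier(NUM_WORKERS)
    procs = [Process(target=worker, args=(r, barrier)) for r in range(NUM_WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

A machine with hundreds of cores would typically mix such barriers with locks, atomics, or message passing, depending on which part of the problem is being coordinated.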
c) The essence of what an intelligent entity does is process information in parallel. Humans would not consider a machine “intelligent” if it processed steps in a serial manner, because it would simply take too long.
When AI first began, there was a meeting where computer scientists discussed how to create intelligent machines. To function properly, Artificial Intelligence sometimes necessitates parallel processing of large amounts of data, because processing large amounts of data sequentially takes time.
Because artificial intelligence is most useful to us when it works swiftly, parallel computing and artificial intelligence are becoming increasingly intertwined. We can now build robust machine learning and artificial intelligence systems because data has become available in incredibly large amounts.
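As a small sketch of why parallelism helps with large data volumes, the snippet below splits a dataset across a process pool and computes a per-chunk statistic. The extract_features function is a hypothetical stand-in for a heavier machine learning step, not a real one from any library.

```python
# Minimal sketch: data-parallel processing with a process pool.
# extract_features is a hypothetical stand-in for a heavier ML step.
from multiprocessing import Pool

def extract_features(chunk):
    # Placeholder "feature": the mean of the chunk.
    return sum(chunk) / len(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk_size = 100_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(processes=4) as pool:
        # Each chunk is processed on a separate core, in parallel.
        features = pool.map(extract_features, chunks)
    print(features)
```

The same pattern scales to real feature extraction or model scoring, where the per-chunk work is expensive enough that doing it sequentially would dominate the runtime.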

2. a) Personally, I think that the traditional machines are called no-remote-memory-access machines (NORMA) because each node acts as an autonomous computer having a processor, a local memory and sometimes I/O devices. In this case, all local memories are private and are accessible only to the local processors.
Also, a distributed-memory multicomputer system consists of multiple computers, known as nodes, interconnected by a message-passing network.
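A minimal sketch of that message-passing model, under my own simplifying assumptions: each process owns private state (its “local memory”), and the only way another process can see it is through an explicit message over a queue standing in for the interconnection network.

```python
# Minimal sketch: two "nodes" with private memory communicating
# only by message passing, as in a NORMA / distributed-memory machine.
from multiprocessing import Process, Queue

def node_a(outbox: Queue) -> None:
    local_memory = {"value": 42}              # private to node A
    outbox.put(("A", local_memory["value"]))  # explicit send

def node_b(inbox: Queue) -> None:
    sender, value = inbox.get()               # explicit receive
    print(f"node B received {value} from node {sender}")

if __name__ == "__main__":
    channel = Queue()                         # models the interconnection network
    pa = Process(target=node_a, args=(channel,))
    pb = Process(target=node_b, args=(channel,))
    pa.start()
    pb.start()
    pa.join()
    pb.join()
```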

b) In my viewpoint, parallel computing has made a significant contribution to socio-economics, given its use for multi-objective optimization in situations where the objective functions, the constraints, and hence the solutions may change over time. These dynamic optimization challenges appear in a variety of real-world applications with significant socio-economic implications.
A parallel-processing software system of this kind uses a computing cluster to run parallel algorithms for scenario calculations, and optimizations are applied within the economic models. Such a system is used to run multi-scenario calculations in order to come up with a viable development strategy for a certain region. Parallel processing is also useful for simulating a nation’s or the world’s economy.
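To make the multi-scenario idea concrete, here is a small sketch of my own that evaluates a grid of hypothetical policy scenarios in parallel and reports the best-scoring one. The scenario parameters and the objective function are invented for illustration and are not a real economic model.

```python
# Minimal sketch: evaluate many economic "scenarios" in parallel.
# Scenario parameters and the objective are illustrative assumptions.
from multiprocessing import Pool

def evaluate_scenario(params):
    tax_rate, investment = params
    # Hypothetical objective: a toy "growth score", not a real model.
    score = investment * (1.0 - tax_rate) - 0.5 * investment ** 2
    return params, score

if __name__ == "__main__":
    scenarios = [(t / 10, i / 10) for t in range(1, 6) for i in range(1, 6)]
    with Pool() as pool:
        results = pool.map(evaluate_scenario, scenarios)
    best_params, best_score = max(results, key=lambda r: r[1])
    print(f"best scenario {best_params} with score {best_score:.3f}")
```

On a cluster the same structure would be expressed with MPI or a job scheduler, but the idea is identical: independent scenarios run concurrently, and the results are compared afterwards.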

c) Super nodes and client-server roles in internet telephony.
It’s worth noting that the client role here is also known as a Skype client, or simply a peer.
Your computer, which is running the Skype software, is referred to as a Skype client. The super nodes are simply Skype peer nodes that are not behind a restrictive firewall or NAT router (Network Address Translation) and thus have full internet access. If it is not behind a NAT router or blocking firewall, and has enough CPU and bandwidth capacity, any Skype client node can become a super node.
All that is needed to keep a Skype client from becoming a super node is for it to be behind a NAT router or a restrictive firewall. A Skype client cannot make a direct connection to another peer if it is behind a NAT router or firewall.
In this situation, a super node serves as a relaying agent for Skype peers that are behind firewalls or NAT routers. Super nodes are linked to Skype peers that are close to them in terms of internet location.
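The relay role can be sketched, under heavy simplification, with in-process queues: two “clients” behind NATs cannot talk directly, so a “super node” forwards messages between their queues. This is only a conceptual model of my own, not Skype’s actual protocol.

```python
# Conceptual sketch of a super node relaying between two
# NAT-restricted clients. Queues stand in for network connections.
from multiprocessing import Process, Queue

def client(name, to_relay: Queue, from_relay: Queue, message: str) -> None:
    to_relay.put((name, message))        # send via the super node
    sender, text = from_relay.get()      # receive via the super node
    print(f"{name} got '{text}' relayed from {sender}")

def super_node(a_in, a_out, b_in, b_out) -> None:
    # Forward A's message to B, and B's message to A.
    b_out.put(a_in.get())
    a_out.put(b_in.get())

if __name__ == "__main__":
    a_in, a_out = Queue(), Queue()
    b_in, b_out = Queue(), Queue()
    procs = [
        Process(target=client, args=("A", a_in, a_out, "hello B")),
        Process(target=client, args=("B", b_in, b_out, "hello A")),
        Process(target=super_node, args=(a_in, a_out, b_in, b_out)),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```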

[Diagram not included: communication between Skype clients A and B, relayed through a super node.]

3. a)

b) i) The architecture consists of four processors, each with its own cache, all connected to a main memory and an input/output system on the other side. It is a shared-memory multiprocessor, which is one of the most important classes of parallel machines.
ii) This is desirable today because it gives better throughput on multiprogramming workloads and supports parallel programs.
Here, the system allows the processors and a set of I/O controllers to access a collection of memory modules through some hardware interconnection. Memory capacity is increased by adding memory modules, and I/O capacity is increased by adding devices to an I/O controller or by adding an additional I/O controller.
Processing capacity can be increased by waiting for a faster processor to become available or by adding more processors, and all the resources are organized around a central memory bus.
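As a small illustration of the shared-memory model, the sketch below has several processes update a single shared counter guarded by a lock, which loosely mirrors processors contending for a shared bus and memory. The counter workload is an invented example of my own.

```python
# Minimal sketch: processes sharing one memory location,
# serialized by a lock (loosely analogous to a shared memory bus).
from multiprocessing import Process, Value, Lock

def worker(counter, lock, increments: int) -> None:
    for _ in range(increments):
        with lock:                       # exclusive access to shared memory
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)              # shared integer in shared memory
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, 10_000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("final counter:", counter.value)  # expected 40000
```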
4. a) First of all, what do we understand by VLSI tech?
It is the process of creating an integrated circuit (IC) by combining millions of MOS transistors onto a single chip. It allows a large number of components to be accommodated on a single chip and clock rates to increase. Therefore, more operations can be performed at a time in parallel.
VLSI technology has aided the performance of computer systems because more and more transistors, gates, and circuits can now be fitted into the same area. As the basic VLSI feature size shrinks, the clock rate improves roughly in proportion to it, while the number of transistors grows as the square. Using many transistors at once (parallelism) can therefore be expected to improve performance more than increasing the clock rate alone.
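A back-of-the-envelope calculation of that scaling rule (clock rate roughly proportional to the feature-size reduction, transistor count growing as its square); the baseline figures are made-up examples, not real process data.

```python
# Toy illustration of the scaling rule stated above.
# Baseline figures are invented for the example.
base_feature_nm = 90          # hypothetical baseline feature size
base_clock_ghz = 1.0
base_transistors = 100e6

shrink = 2.0                  # feature size halved (90 nm -> 45 nm)
new_clock_ghz = base_clock_ghz * shrink            # clock ~ proportional
new_transistors = base_transistors * shrink ** 2   # transistor count ~ square

print(f"feature size: {base_feature_nm} nm -> {base_feature_nm / shrink:.0f} nm")
print(f"clock rate:   {base_clock_ghz} GHz -> {new_clock_ghz} GHz")
print(f"transistors:  {base_transistors:.0f} -> {new_transistors:.0f}")
```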
Below are the advantages of VLSI technology.
– Reduced size of circuits.
– Increased cost-effectiveness for devices.
– Improved performance in terms of the operating speed of circuits.
– Higher device reliability.
– Requires less power than discrete components.
– Requires less space and promotes miniaturization.
CMOS has become the prevailing technology due to its high speed and packing density coupled with lower power consumption. New technologies have emerged to further increase circuit speed and to reduce design and technology constraints.
Examples are combined bipolar-CMOS (BiCMOS) and CMOS on silicon-on-insulator (SOI).
b) Parallel computer architecture is a way of arranging all of the resources in a way that maximizes efficiency and programmability while remaining within the constraints of technology and cost at any given time. As a result, parallel computer architecture introduces a new dimension to the production of computer systems by adding increasingly large numbers of processors to optimize performance.
In addition, higher performance is achieved by utilizing large numbers of processors, exceeding what a single processor can deliver at a given point in time.
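To quantify that claim, here is a short sketch using Amdahl's law, which is not cited in the text above and is introduced here as an assumption: the achievable speedup from adding processors is limited by the fraction of the program that stays serial.

```python
# Sketch: Amdahl's law, added here as an assumption to quantify
# the speedup obtainable from using many processors.
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

if __name__ == "__main__":
    for p in (1, 4, 16, 64, 256):
        # A program that is 5% inherently serial.
        print(f"{p:3d} processors -> speedup {amdahl_speedup(0.05, p):.2f}x")
```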
c) In both astrophysics and oceanography, parallel computing plays a role in the geographical context.
– Parallel computing is utilized to investigate ocean resources using multiprocessors with large computational capacity and low power requirements.
– Originally, ROMs were employed, but MPI programming methods are now used.
– The methods and computing tools developed and used in astrophysics study are referred to as computational astrophysics.
– PIC, PM, and n-body simulators are all key computational astrophysics tools; a minimal n-body sketch follows below.
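The sketch below is a heavily simplified direct n-body step of my own (toy units, a handful of bodies, a plain Euler update); real astrophysics codes parallelize the pairwise force loop across many cores.

```python
# Minimal direct n-body sketch: pairwise gravitational accelerations
# and one Euler step. Units and values are arbitrary toy numbers.
import math

G = 1.0          # gravitational constant in toy units
DT = 0.01        # time step

# Each body: [x, y, vx, vy, mass] -- invented example values.
bodies = [
    [0.0, 0.0, 0.0, 0.0, 100.0],
    [1.0, 0.0, 0.0, 10.0, 1.0],
    [0.0, 2.0, -7.0, 0.0, 1.0],
]

def step(bodies):
    accs = []
    for i, (xi, yi, _, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (xj, yj, _, _, mj) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r = math.hypot(dx, dy)
            ax += G * mj * dx / r ** 3   # x component of gravitational pull
            ay += G * mj * dy / r ** 3   # y component
        accs.append((ax, ay))
    for (ax, ay), b in zip(accs, bodies):
        b[2] += ax * DT                  # update velocity
        b[3] += ay * DT
        b[0] += b[2] * DT                # update position
        b[1] += b[3] * DT

for _ in range(10):
    step(bodies)
print(bodies)
```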
5. a) The use of parallel computing in expert systems
Parallel processing is extensively utilized to reduce computation times, as I understand it. The inference method and the problem size must be addressed when applying it to non-numerical situations such as expert systems. A fault-diagnosis expert system, using either model-based or rule-based inference, is used to diagnose faults in power systems of various sizes.
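A toy sketch of rule-based inference spread across processes: each worker checks a subset of the rules against the same facts. The rules, measurements, and thresholds are invented for illustration and are not from any real diagnosis system.

```python
# Toy sketch: checking fault-diagnosis rules in parallel.
# Rules, facts, and thresholds are invented for illustration.
from multiprocessing import Pool
import operator

FACTS = {"voltage": 0.82, "current": 1.9, "frequency": 49.9}

# Each rule: (diagnosis, measured quantity, comparison, threshold).
RULES = [
    ("undervoltage fault", "voltage", operator.lt, 0.90),
    ("overcurrent fault", "current", operator.gt, 1.50),
    ("underfrequency fault", "frequency", operator.lt, 49.5),
]

def check_rule(rule):
    diagnosis, key, compare, threshold = rule
    return diagnosis, compare(FACTS[key], threshold)

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(check_rule, RULES)   # rules checked in parallel
    for diagnosis, fired in results:
        if fired:
            print("diagnosed:", diagnosis)
```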
b) Consider a setup for downloading torrent files. Assume peer A needs to download the file described by the torrent file shrek.torrent, obtained from a webpage. The file is broken into little pieces and dispersed across multiple systems before being delivered to the person downloading it. This is mostly done to spread the load over multiple machines. The tracker keeps track of which peers hold pieces of the file being downloaded. Peers are the machines that send and receive data to and from you. Seeders are the peers that only provide you with data.
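A small sketch of the “broken into little pieces” step: BitTorrent splits the payload into fixed-size pieces and records a SHA-1 hash of each, so a piece fetched from any peer can be verified. The payload and piece size below are illustrative values of my own.

```python
# Minimal sketch: split a payload into fixed-size pieces and hash
# each one, as a torrent's piece list does. Values are illustrative.
import hashlib

PIECE_SIZE = 16  # bytes; real torrents use much larger pieces (e.g. 256 KiB)

payload = b"example payload standing in for the file's contents..."

pieces = [payload[i:i + PIECE_SIZE] for i in range(0, len(payload), PIECE_SIZE)]
piece_hashes = [hashlib.sha1(p).hexdigest() for p in pieces]

for index, digest in enumerate(piece_hashes):
    print(f"piece {index}: {digest}")

# A downloader can fetch pieces from different peers in any order and
# verify each piece against its hash before assembling the file.
```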

