Sunday, March 31, 2013
Lesson 28: Clock Speed
Overview: Hertz is the unit used to describe one cycle per second of a computer's processing.
(http://www.techterms.com) A clock cycle, or simply a "cycle," is a single electronic pulse of a CPU. During each cycle, a CPU can perform a basic operation such as fetching an instruction, accessing memory, or writing data. Since only simple commands can be performed during each cycle, most CPU processes require multiple clock cycles.
In physics, the frequency of a signal is determined by cycles per second, or "hertz." Similarly, the frequency of a processor is measured in clock cycles per second. Since modern processors can complete millions of clock cycles every second, processor speeds are often measured in megahertz or gigahertz.
Frequency measures the number of times something occurs in a specific amount of time. For example, if someone visits the grocery store twice a week, her shopping frequency is 2 visits per week. While frequency can be used to measure the rate of any action, in technical applications it is typically used to measure wave rates or processing speed. These frequencies often occur multiple times per second and therefore are measured in hertz (Hz) or related units of measurement, such as megahertz or gigahertz.
One megahertz (abbreviated: MHz) is equal to 1,000 kilohertz, or 1,000,000 hertz. It can also be described as one million cycles per second. Megahertz is used to measure wave frequencies, as well as the speed of microprocessors.
Radio waves, which are used for both radio and TV broadcasts, are typically measured in megahertz. For example, FM radio stations broadcast their signals between 88 and 108 MHz. When you tune to 93.7 on your radio, the station is broadcasting at a frequency of 93.7 MHz.
Megahertz is also used to measure processor clock speeds. This measurement indicates how many instruction cycles per second a processor can perform. While the clock speeds of processors in mobile devices and other small electronics are still measured in megahertz, modern computer processors are typically measured in gigahertz.
Clock speed is the rate at which a processor can complete a processing cycle. It is typically measured in megahertz or gigahertz. One megahertz is equal to one million cycles per second, while one gigahertz equals one billion cycles per second. This means a 1.8 GHz processor has twice the clock speed of a 900 MHz processor. However, a 1.8 GHz CPU is not necessarily twice as fast as a 900 MHz CPU, because different processors often use different architectures.
For example, one processor may require more clock cycles to complete a multiplication instruction than another processor. If the 1.8 GHz CPU can complete a multiplication instruction in 4 cycles, while the 900 MHz CPU takes 7 cycles, the 1.8 GHz processor will be more than twice as fast as the 900 MHz processor. Conversely, if the 1.8 GHz processor takes more cycles to perform the instruction, it will be less than 2x as fast as the 900 MHz processor.
Other factors, such as a computer's bus speed, cache size, RAM speed, and hard drive speed, also contribute to the overall performance of the machine. Therefore, while the processor's clock speed is a significant indicator of how fast a computer is, it is not the only factor that matters.
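To make that concrete, here is a minimal Python sketch of the multiplication example above. The 4-cycle and 7-cycle counts are the hypothetical figures from the lesson, not specifications of real processors:

```python
# Effective speed = clock rate / cycles needed per instruction.
# The cycle counts below are the hypothetical figures from the
# example above, not measurements of real CPUs.

def instructions_per_second(clock_hz, cycles_per_instruction):
    """How many multiply instructions finish each second."""
    return clock_hz / cycles_per_instruction

fast_cpu = instructions_per_second(1.8e9, 4)   # 1.8 GHz, 4 cycles per multiply
slow_cpu = instructions_per_second(900e6, 7)   # 900 MHz, 7 cycles per multiply

print(f"1.8 GHz CPU: {fast_cpu:,.0f} multiplies/sec")
print(f"900 MHz CPU: {slow_cpu:,.0f} multiplies/sec")
print(f"Speed ratio: {fast_cpu / slow_cpu:.2f}x")   # 3.50x, not just 2x
```

Running it shows the 1.8 GHz chip finishing about 3.5x as many multiplies per second, which is why clock speed alone doesn't tell the whole story.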
Lesson 27: Data Measurement
BYTES to BITS conversion.
How many megabytes in 1 GB?
MEASUREMENT CHART
(http://www.wu.ece.ufl.edu/links/dataRate/DataMeasurementChart.html)
Unit | Value | Size |
---|---|---|
bit (b) | 0 or 1 | 1/8 of a byte |
byte (B) | 8 bits | 1 byte |
kilobyte (KB) | 1000^1 bytes | 1,000 bytes |
megabyte (MB) | 1000^2 bytes | 1,000,000 bytes |
gigabyte (GB) | 1000^3 bytes | 1,000,000,000 bytes |
terabyte (TB) | 1000^4 bytes | 1,000,000,000,000 bytes |
petabyte (PB) | 1000^5 bytes | 1,000,000,000,000,000 bytes |
exabyte (EB) | 1000^6 bytes | 1,000,000,000,000,000,000 bytes |
zettabyte (ZB) | 1000^7 bytes | 1,000,000,000,000,000,000,000 bytes |
yottabyte (YB) | 1000^8 bytes | 1,000,000,000,000,000,000,000,000 bytes |
NOTE: A lowercase "b" is used as an abbreviation for bits, while an uppercase "B" represents bytes. This is an important distinction, since a byte is 8x as large as a bit.
For example, 100 KB (kilobytes) = 800 Kb (kilobits).
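Since the chart uses decimal (SI) units, these conversions are simple multiplications. A minimal Python sketch (the function names are just for illustration):

```python
# Conversions based on the decimal (SI) chart above.
BITS_PER_BYTE = 8

def kilobytes_to_kilobits(kb):
    """100 KB -> 800 Kb, since a byte is 8x as large as a bit."""
    return kb * BITS_PER_BYTE

def gigabytes_to_megabytes(gb):
    """1 GB -> 1,000 MB, since each step in the chart is x1000."""
    return gb * 1000

print(kilobytes_to_kilobits(100))   # 800
print(gigabytes_to_megabytes(1))    # 1000
```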
Lesson 26: Binary Coding
Binary code: ASCII ("ask-ee")
(http://www.telacommunications.com/nutshell/ascii.htm#chart)
ASCII, pronounced "ask-ee" is the acronym for
American Standard Code for Information Interchange. It's a set of characters which,
unlike the characters in word processing documents, allow no special formatting
like different fonts, bold, underlined or italic text. All the characters used
in email messages are ASCII characters and so are all the characters used in
HTML documents. (Web browsers read the ASCII characters between angle brackets,
"<" and ">", to interpret how to format and display HTML documents.)
An "ASCII file" is a data or
text file that contains only characters coded from the standard ASCII character set.
Characters 0 through 127 comprise the Standard ASCII Set and characters
128 to 255 are considered to be in the
Extended ASCII Set.
These codes, however, may not be the same in all computers, and files containing these
characters may not display or convert properly in another ASCII program.
Knowing something about ASCII can be helpful.
ASCII files can be used as a common denominator for data conversions. For example,
if program A can't convert its data to the format of program B, but
both programs can input and output ASCII files, the conversion may be possible.
ASCII characters are the ones used to send and receive email. If you're familiar
with email, you already know that formatting like italic type and underlining is
not possible. Email transmissions are limited to ASCII characters, and because
of that, graphics files and documents with non-ASCII characters created in
word processors, spreadsheet or database programs must be "ASCII-fied" and
sent as email file attachments. When the files reach
their destination they are "deASCII-fied" (i.e. decoded) and
reconstructed for use.
ASCII | Abbr | Description |
---|---|---|
0 | NUL | Null |
1 | SOH | Start of heading |
2 | STX | Start of text |
3 | ETX | End of text |
4 | EOT | End of transmit |
5 | ENQ | Enquiry |

ASCII | Character |
---|---|
64 | @ |
65 | A |
66 | B |
67 | C |
68 | D |
69 | E |
70 | F |
71 | G |
72 | H |
73 | I |
74 | J |
75 | K |
76 | L |
77 | M |
78 | N |
79 | O |
80 | P |
81 | Q |
82 | R |
83 | S |
84 | T |
85 | U |
86 | V |
87 | W |
88 | X |
89 | Y |
90 | Z |
(http://www.telacommunications.com/nutshell/ascii.htm)
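If you want to explore the chart yourself, Python's built-in ord() and chr() functions convert between characters and their ASCII code values:

```python
# ord() gives a character's ASCII code; chr() goes the other way.
print(ord('A'))        # 65
print(ord('Z'))        # 90
print(chr(64))         # @

# Rebuild the 64-90 section of the chart above.
for code in range(64, 91):
    print(code, chr(code))
```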
Lesson 25: Data Representation
( http://www.willamette.edu/~gorr/classes/cs130/lectures/data_rep.htm)
Data Representation refers to the methods used internally to represent information stored in a computer. Computers store lots of different types of information:
- numbers
- text
- graphics of many varieties (stills, video, animation)
- sound
How can a sequence of 0's and 1's represent things as diverse as your photograph, your favorite song, a recent movie, and your term paper?
It all depends on how we interpret the information. Computers use numeric codes to represent all the information they store. These codes are similar to those you may have used as a child to encrypt secret notes: let 1 stand for A, 2 stand for B, etc. With this code, any written message can be represented numerically. The codes used by computers are a bit more sophisticated, and they are based on the binary number system (base two) instead of the more familiar (for the moment, at least!) decimal system. Computers use a variety of different codes. Some are used for numbers, others for text, and still others for sound and graphics.
THE RELATIONSHIP BETWEEN BITS, BYTES & CHARACTERS
- Memory consists of bits (0 or 1)
- a single bit can represent two pieces of information
- bytes (= 8 bits)
- a single byte can represent 2x2x2x2x2x2x2x2 = 2^8 = 256 pieces of information
- words (= 2, 4, or 8 bytes)
- a 2-byte word can represent 256^2 pieces of information (approximately 65 thousand; see the quick check after this list)
- Byte addressable - each byte has its own address.
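A quick Python check of the arithmetic in the list above:

```python
# How many distinct values can a given amount of memory represent?
print(2 ** 1)      # 2      -> one bit: two pieces of information
print(2 ** 8)      # 256    -> one byte (8 bits)
print(256 ** 2)    # 65536  -> a 2-byte word, approximately 65 thousand
```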
In ASCII, an "A" is 65," B" is 66, "a" is 97, "b" is 98, and so forth. When you save a file as "plain text", it is stored using ASCII. ASCII format uses 1 byte per character 1 byte gives only 256 (128 standard and 128 non-standard) possible characters The code value for any character can be converted to base 2, so any written message made up of ASCII characters can be converted to a string of 0's and 1's.
Forum :
x86 = 32bit
x64 = 64bit
"As the x86 term became common after the introduction of the 80386, it usually implies a binary compatibility with the 32-bit instruction set of the 80386. This may sometimes be emphasized as x86-32 to distinguish it either from the original 16-bit x86-16 or from the newer 64-bit x86-64 (also called x64). Although most x86-processors used in new personal computers and servers have 64-bit capabilities, to avoid compatibility problems with older computers or systems, the term x86-64 is often used to denote 64-bit software, with the term x86 implying only 32-bit.
Go with the 32bit... It has more compatible drivers and support. Unless you are going to be doing 3D Rendering, Video Editing, or RAW Files in Photoshop, 64bit OSs aren't worth the trouble."
Summary:
The smallest data representation is called a BIT.
8 BITS are equal to 1 BYTE.
ASCII uses 7 BITS to represent each character.
Lesson 24: Overview of computer system
Keywords:
A Computer System is defined as a combination of several units or components that process information.
INPUT, OUTPUT, PROCESS & STORAGE are the main components of a computer system.
In order for this system to function, it must involve hardware, software and users.
The computer process involves: Fetch-Decode-Execute-Store.
A bit is the smallest data representation, and it has only TWO conditions: 1 or 0.
JUST THE FACTS
- Computers work by running programs or software. Programs are normally stored on your hard disk and loaded into memory to be executed by the CPU.
- You control the operating system through its user interface. A GUI or graphical user interface uses the manipulation of visual components, while a command line interface requires you to type in specific commands.
- Your computer has two kinds of software: systems software (aka the operating system), which is loaded when your computer starts up, and applications, which are programs you use to accomplish a particular task.
- Computer programmers create programs by writing source code in a programming language. This source code must be converted to a native machine language that the CPU can execute.
- Your CPU consists of two parts, the Control Unit and the Arithmetic Logic Unit (ALU), that work together to perform the steps of the machine cycle: fetch, decode, execute, and store (see the toy sketch after this list).
- Just like numbers, pixels and sounds, your computer stores computer programs as binary numbers. In the end, "bits is bits".
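To tie the machine-cycle facts together, here is a toy Python sketch of fetch-decode-execute-store. The two-instruction "machine language" and single accumulator are invented purely for illustration; real CPUs are far more complex:

```python
# A toy machine cycle: fetch-decode-execute-store.
# The instructions and one-register machine here are invented
# for illustration only.

program = [("ADD", 5), ("ADD", 3), ("MUL", 2)]   # hypothetical instructions
accumulator = 0

for pc in range(len(program)):
    instruction = program[pc]            # FETCH the next instruction
    opcode, operand = instruction        # DECODE it into parts
    if opcode == "ADD":                  # EXECUTE in the ALU
        result = accumulator + operand
    elif opcode == "MUL":
        result = accumulator * operand
    accumulator = result                 # STORE the result back
    print(pc, opcode, operand, "->", accumulator)
# Final accumulator: (0 + 5 + 3) * 2 = 16
```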
Date: 5 Apr 2013 Class: 4A
Methods: Lecture, discussion & simple task
Materials: Power point notes
Rating : Good! Student can explain the process "A is pressed and displayed on screen"