
How to Improve the Performance of Angular Application?

Angular is one of the most widely used frameworks for application development. It is especially preferred by developers who need to build a high-quality dynamic app. The framework follows a modular web development approach, which makes it quite a popular option for web app developers. But did you know that you can improve the performance of your Angular app and make it even more efficient? Fortunately, you can: a few effective tricks will make your dynamic Angular app noticeably faster.

Let’s check some easy and effective ways to boost the performance of your Angular app.

Minimize Change Detection

By default, Angular runs change detection on every component to pick up all the changes in your data. Even though Angular is one of the faster frameworks for dynamic apps, it still has to work hard to monitor all of those changes.

You can minimize the framework's work by giving it an indication of when to detect changes in a component. By changing the default change detection strategy to ChangeDetectionStrategy.OnPush, you reduce the workload of the framework: the OnPush strategy tells Angular to detect changes only when you run detection manually or when an input reference changes.

By default, components in an Angular application undergo change detection with nearly every user interaction. But Angular allows you to take control of this process: you can indicate to Angular that a component subtree is up to date and exclude it from change detection.

The first way to exclude a component subtree from change detection is by setting the `changeDetection` property to `ChangeDetectionStrategy.OnPush` in the @Component decorator. This tells Angular that the component only needs to be checked if an input has changed, and that all of the inputs can be considered immutable.
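For illustration, here is a minimal sketch of an OnPush component. The component and input names are our own, not from the article; only the `changeDetection` property matters here:

```ts
import { ChangeDetectionStrategy, Component, Input } from '@angular/core';

// Hypothetical component used purely to illustrate the OnPush strategy.
@Component({
  selector: 'app-user-card',
  template: `<p>{{ user.name }}</p>`,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class UserCardComponent {
  // With OnPush, Angular re-checks this component only when this input's
  // reference changes (or when an event or async-pipe emission originates
  // inside the component's own subtree).
  @Input() user!: { name: string };
}
```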

At the price of a bit more complexity, it is possible to gain even finer control. For example, by injecting the ChangeDetectorRef service, you can inform Angular that a component should be detached from change detection. You then take over calling `reattach()` or `detectChanges()` yourself, which gives you full control of when and where a component subtree is checked.
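Here is a sketch of that manual approach. The ticker component is hypothetical; `detach()` and `detectChanges()` are real ChangeDetectorRef methods:

```ts
import { ChangeDetectorRef, Component, OnInit } from '@angular/core';

// Hypothetical component: updates its state frequently but re-renders rarely.
@Component({
  selector: 'app-ticker',
  template: `<span>{{ value }}</span>`,
})
export class TickerComponent implements OnInit {
  value = 0;

  constructor(private readonly cdr: ChangeDetectorRef) {}

  ngOnInit(): void {
    this.cdr.detach(); // exclude this subtree from automatic change detection
    setInterval(() => {
      this.value++;
      // Re-render manually, only on every tenth update.
      if (this.value % 10 === 0) {
        this.cdr.detectChanges();
      }
    }, 100);
  }
}
```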

Minimize DOM Manipulations

If you ever need to replace the data behind a list (perhaps because of an API request), you can run into performance issues. This is because Angular does not track the identity of the items in your collection: it doesn't know which items were added and which were deleted.

By adding a trackBy function, you ask Angular to work out which items were added and deleted based on a unique identifier. This way Angular only re-renders the items that actually changed.

By default, when iterating over a list of objects, Angular uses object identity to determine whether items were added, removed, or rearranged. This works well for most situations. However, with the introduction of immutable practices, every change to the list's content generates new objects.

In turn, ngFor will generate a new collection of DOM elements to be rendered. If the list is long or complex enough, this will increase the time it takes the browser to render updates. To mitigate this issue, it is possible to use trackBy to indicate how a change to an entry is determined.
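A minimal sketch of trackBy in use (the component and the `trackById` helper name are our own illustration):

```ts
import { Component } from '@angular/core';

interface Item {
  id: number;
  label: string;
}

@Component({
  selector: 'app-item-list',
  template: `
    <ul>
      <li *ngFor="let item of items; trackBy: trackById">{{ item.label }}</li>
    </ul>
  `,
})
export class ItemListComponent {
  items: Item[] = [
    { id: 1, label: 'first' },
    { id: 2, label: 'second' },
  ];

  // Angular reuses the existing DOM node for any entry whose id is unchanged,
  // even when the array (or the item object) is a brand-new reference.
  trackById(index: number, item: Item): number {
    return item.id;
  }
}
```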

Angular Observables Tutorial

Quantum supremacy using a programmable superconducting processor

The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor.

A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2⁵³. Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
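For a sense of scale, the state-space dimension works out as follows (our arithmetic, consistent with the "size 10 quadrillion" figure quoted later in this post):

```latex
\[
  2^{53} = 9{,}007{,}199{,}254{,}740{,}992 \approx 9.0 \times 10^{15}
  \quad\text{(roughly ten quadrillion basis states)}
\]
```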

In the early 1980s, Richard Feynman proposed that a quantum computer would be an effective tool with which to solve problems in physics and chemistry, given that it is exponentially costly to simulate large quantum systems with classical computers. Realizing Feynman's vision poses substantial experimental and theoretical challenges. First, can a quantum system be engineered to perform a computation in a large enough computational (Hilbert) space and with a low enough error rate to provide a quantum speedup? Second, can we formulate a problem that is hard for a classical computer but easy for a quantum computer? By computing such a benchmark task on our superconducting qubit processor, we tackle both questions. Our experiment achieves quantum supremacy, a milestone on the path to full-scale quantum computing.

This experiment, referred to as a quantum supremacy experiment, provided direction for our team to overcome the many technical challenges inherent in quantum systems engineering to make a computer that is both programmable and powerful. To test the total system performance we selected a sensitive computational benchmark that fails if just a single component of the computer is not good enough.

Left: Artist's rendition of the Sycamore processor mounted in the cryostat. (Forest Stearns, Google AI Quantum Artist in Residence) Right: Photograph of the Sycamore processor. (Erik Lucero, Research Scientist and Lead Production Quantum Hardware)

The Experiment
To get a sense of how this benchmark works, imagine enthusiastic quantum computing neophytes visiting our lab in order to run a quantum algorithm on our new processor. They can compose algorithms from a small dictionary of elementary gate operations. Since each gate has a probability of error, our guests would want to limit themselves to a modest sequence with about a thousand total gates. Assuming these programmers have no prior experience, they might create what essentially looks like a random sequence of gates, which one could think of as the “hello world” program for a quantum computer. Because there is no structure in random circuits that classical algorithms can exploit, emulating such quantum circuits typically takes an enormous amount of classical supercomputer effort.

Each run of a random quantum circuit on a quantum computer produces a bitstring, for example 0000101. Owing to quantum interference, some bitstrings are much more likely to occur than others when we repeat the experiment many times. However, finding the most likely bitstrings for a random quantum circuit on a classical computer becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow.

Process for demonstrating quantum supremacy.

In the experiment, we first ran random simplified circuits from 12 up to 53 qubits, keeping the circuit depth constant. We checked the performance of the quantum computer using classical simulations and compared with a theoretical model. Once we verified that the system was working, we ran random hard circuits with 53 qubits and increasing depth, until reaching the point where classical simulation became infeasible.

Estimate of the equivalent classical computation time, assuming 1M CPU cores, for quantum supremacy circuits as a function of the number of qubits and number of cycles for the Schrödinger-Feynman algorithm. The star shows the estimated computation time for the largest experimental circuits.

This result is the first experimental challenge against the extended Church-Turing thesis, which states that classical computers can efficiently implement any “reasonable” model of computation. With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.

The Sycamore Processor
The quantum supremacy experiment was run on a fully programmable 54-qubit processor named "Sycamore." It consists of a two-dimensional grid in which each qubit is connected to four other qubits. As a consequence, the chip has enough connectivity that the qubit states quickly interact throughout the entire processor, making the overall state impossible to emulate efficiently with a classical computer.

The success of the quantum supremacy experiment was due to our improved two-qubit gates with enhanced parallelism that reliably achieve record performance, even when operating many gates simultaneously. We achieved this performance using a new type of control knob that is able to turn off interactions between neighboring qubits. This greatly reduces the errors in such a multi-connected qubit system. We made further performance gains by optimizing the chip design to lower crosstalk, and by developing new control calibrations that avoid qubit defects.

We designed the circuit in a two-dimensional square grid, with each qubit connected to four other qubits. This architecture is also forward-compatible with the implementation of quantum error correction. We see our 54-qubit Sycamore processor as the first in a series of ever more powerful quantum processors.

System-wide Pauli and measurement errors
Heat map showing single- (e1; crosses) and two-qubit (e2; bars) Pauli errors for all qubits operating simultaneously. The layout shown follows the distribution of the qubits on the processor. (Courtesy of Nature magazine.)

Testing Quantum Physics
To ensure the future utility of quantum computers, we also needed to verify that there are no fundamental roadblocks coming from quantum mechanics. Physics has a long history of testing the limits of theory through experiments, since new phenomena often emerge when one starts to explore new regimes characterized by very different physical parameters. Prior experiments showed that quantum mechanics works as expected up to a state-space dimension of about 1000. Here, we expanded this test to a size of 10 quadrillion and find that everything still works as expected. We also tested fundamental quantum theory by measuring the errors of two-qubit gates and finding that this accurately predicts the benchmarking results of the full quantum supremacy circuits. This shows that there is no unexpected physics that might degrade the performance of our quantum computer. Our experiment therefore provides evidence that more complex quantum computers should work according to theory, and makes us feel confident in continuing our efforts to scale up.

Applications
The Sycamore quantum computer is fully programmable and can run general-purpose quantum algorithms. Since achieving quantum supremacy results last spring, our team has already been working on near-term applications, including quantum physics simulation and quantum chemistry, as well as new applications in generative machine learning, among other areas.

We also now have the first widely useful quantum algorithm for computer science applications: certifiable quantum randomness. Randomness is an important resource in computer science, and quantum randomness is the gold standard, especially if the numbers can be self-checked (certified) to come from a quantum computer. Testing of this algorithm is ongoing, and in the coming months we plan to implement it in a prototype that can provide certifiable random numbers.

What’s Next?
Our team has two main objectives going forward, both towards finding valuable applications in quantum computing. First, in the future we will make our supremacy-class processors available to collaborators and academic researchers, as well as companies that are interested in developing algorithms and searching for applications for today’s NISQ processors. Creative researchers are the most important resource for innovation — now that we have a new computational resource, we hope more researchers will enter the field motivated by trying to invent something useful.

Second, we’re investing in our team and technology to build a fault-tolerant quantum computer as quickly as possible. Such a device promises a number of valuable applications. For example, we can envision quantum computing helping to design new materials — lightweight batteries for cars and airplanes, new catalysts that can produce fertilizer more efficiently (a process that today produces over 2% of the world’s carbon emissions), and more effective medicines. Achieving the necessary computational capabilities will still require years of hard engineering and scientific work. But we see a path clearly now, and we’re eager to move ahead.

You can find more here:

Quantum supremacy using a programmable superconducting processor

Quantum Supremacy – Google AI

Author information

The Google AI Quantum team conceived the experiment. The applications and algorithms team provided the theoretical foundation and the specifics of the algorithm. The hardware team carried out the experiment and collected the data. The data analysis was done jointly with outside collaborators. All authors wrote and revised the manuscript and the Supplementary Information.

Correspondence to John M. Martinis.

Complexity Theory – Calculates Complexity of Problem

Complexity theory is a central topic in theoretical computer science. It has direct applications to computability theory and uses computation models such as Turing machines to help test complexity.

Complexity theory helps computer scientists relate and group problems together into complexity classes. Sometimes, if one problem can be solved, it opens a way to solve other problems in its complexity class. Complexity helps determine the difficulty of a problem, often measured by how much time and space (memory) it takes to solve a particular problem. For example, some problems can be solved in polynomial amounts of time and others take exponential amounts of time, with respect to the input size.

Growth Rates of Functions

The complexity of computational problems can be discussed by choosing a specific abstract machine as a model of computation and considering how much time and/or space machines of that type require for the solutions. In order to compare two problems, it is necessary to look at instances of different sizes. Using the criterion of runtime, for example, the most common approach is to compare the growth rates of two runtimes, each viewed as a function of the instance size.

Definition: Notation for comparing growth rates

Suppose ƒ₁, ƒ₂ : ℕ → ℕ are partial functions, each of which is defined at all but a finite number of points. We write

ƒ₁(n) = Ο(ƒ₂(n))

or simply ƒ₁ = Ο(ƒ₂), if there are constants C and n₀ such that for every n ≥ n₀, ƒ₁(n) and ƒ₂(n) are defined and ƒ₁(n) ≤ Cƒ₂(n). We write

ƒ₁(n) = Θ(ƒ₂(n))

to mean that ƒ₁ = Ο(ƒ₂) and ƒ₂ = Ο(ƒ₁). Finally,

ƒ₁(n) = ο(ƒ₂(n))

or ƒ₁ = ο(ƒ₂), means that for every positive constant C there is a constant n₀ such that for every n ≥ n₀, ƒ₁(n) ≤ Cƒ₂(n).

The statements ƒ₁ = Ο(ƒ₂), ƒ₁ = Θ(ƒ₂) and ƒ₁ = ο(ƒ₂) are read "ƒ₁ is big-oh of ƒ₂", "ƒ₁ is big-theta of ƒ₂" and "ƒ₁ is little-oh of ƒ₂", respectively.

All these statements can be rephrased in terms of the ratio ƒ₁(n)/ƒ₂(n), provided that ƒ₂(n) is eventually greater than 0. Saying that ƒ₁ = ο(ƒ₂) means that the limit of this ratio as n approaches infinity is 0; the statement ƒ₁ = Ο(ƒ₂) means only that the ratio is bounded. If ƒ₁ = Θ(ƒ₂), and both functions are eventually nonzero, then both the ratios ƒ₁/ƒ₂ and ƒ₂/ƒ₁ are bounded, which is the same as saying that the ratio ƒ₁/ƒ₂ must stay between two fixed positive values (or is "approximately constant").

If the statement ƒ₁ = Ο(ƒ₂) fails, we write ƒ₁ ≠ Ο(ƒ₂), and similarly for the other two. Saying that ƒ₁ ≠ Ο(ƒ₂) means that it is impossible to find a constant C such that ƒ₁(n) ≤ Cƒ₂(n) for all sufficiently large n; in other words, the ratio ƒ₁(n)/ƒ₂(n) is unbounded. This means that although the ratio ƒ₁(n)/ƒ₂(n) may not be large for all large values of n, it is large for infinitely many values of n.
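As a quick worked example of these definitions (our own illustration, not part of the original text):

```latex
\textbf{Claim.} $3n^2 + 5n = \Theta(n^2)$.

\emph{Upper bound:} for every $n \ge 1$,
\[ 3n^2 + 5n \;\le\; 3n^2 + 5n^2 \;=\; 8n^2, \]
so the constants $C = 8$ and $n_0 = 1$ witness $3n^2 + 5n = O(n^2)$.

\emph{Lower bound:} for every $n \ge 1$, $n^2 \le 3n^2 + 5n$, so
$n^2 = O(3n^2 + 5n)$ with $C = 1$ and $n_0 = 1$.
Hence $3n^2 + 5n = \Theta(n^2)$.
```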

Both in theory and in practice, complexity theory helps computer scientists determine the limits of what computers can and cannot do.
What is Blockchain Technology?

Blockchain Technology

"The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value." – Don & Alex Tapscott, authors of Blockchain Revolution (2016).

With a blockchain, many people can write entries into a record of information, and a community of users controls how that record is amended and updated. In this respect it is like Wikipedia: entries are not the product of a single publisher, and no one person controls the information.

Descending to ground level, however, the differences that make blockchain technology unique become more clear. While both run on distributed networks (the internet), Wikipedia is built into the World Wide Web (WWW) using a client-server network model.

A user (client) with permissions associated with its account is able to change Wikipedia entries stored on a centralized server.

Whenever a user accesses the Wikipedia page, they will get the updated version of the ‘master copy’ of the Wikipedia entry. Control of the database remains with Wikipedia administrators allowing for access and permissions to be maintained by a central authority.

Is Blockchain Technology the New Internet?

The blockchain is an undeniably ingenious invention – the brainchild of a person or group of people known by the pseudonym, Satoshi Nakamoto. But since then, it has evolved into something greater, and the main question every single person is asking is: What is Blockchain?

By allowing digital information to be distributed but not copied, blockchain technology created the backbone of a new type of internet. Originally devised for the digital currency Bitcoin, the technology has since been put to other potential uses by the tech community.

Blockchain Transaction Cycle

A blockchain is, in the simplest of terms, a time-stamped series of immutable records of data that is managed by a cluster of computers not owned by any single entity. Each of these blocks of data (i.e. a "block") is secured and bound to the others using cryptographic principles (i.e. the "chain").

So, what is so special about it and why are we saying that it has industry disrupting capabilities?

The blockchain network has no central authority — it is the very definition of a democratized system. Since it is a shared and immutable ledger, the information in it is open for anyone and everyone to see. Hence, anything that is built on the blockchain is by its very nature transparent and everyone involved is accountable for their actions.

Blockchain Explained

A blockchain carries no transaction cost. (An infrastructure cost, yes, but no transaction cost.) The blockchain is a simple yet ingenious way of passing information from A to B in a fully automated and safe manner. One party to a transaction initiates the process by creating a block. This block is verified by thousands, perhaps millions, of computers distributed around the net. The verified block is added to a chain, which is stored across the net, creating not just a unique record, but a unique record with a unique history. Falsifying a single record would mean falsifying the entire chain in millions of instances. That is virtually impossible. Bitcoin uses this model for monetary transactions, but it can be deployed in many other ways.

Think of a railway company. We buy tickets on an app or the web. The credit card company takes a cut for processing the transaction. With blockchain, not only can the railway operator save on credit card processing fees, it can move the entire ticketing process to the blockchain. The two parties in the transaction are the railway company and the passenger. The ticket is a block, which will be added to a ticket blockchain. Just as a monetary transaction on blockchain is a unique, independently verifiable and unfalsifiable record (like Bitcoin), so can your ticket be. Incidentally, the final ticket blockchain is also a record of all transactions for, say, a certain train route, or even the entire train network, comprising every ticket ever sold, every journey ever taken.

But the key here is this: it’s free. Not only can the blockchain transfer and store money, but it can also replace all processes and business models which rely on charging a small fee for a transaction. Or any other transaction between two parties.

Even recent entrants like Uber and AirBnB are threatened by blockchain technology. All you need to do is encode the transactional information for a car ride or an overnight stay, and again you have a perfectly safe way that disrupts the business model of the companies which have just begun to challenge the traditional economy. We are not just cutting out the fee-processing middle man, we are also eliminating the need for the match-making platform.

Because blockchain transactions are free, you can charge minuscule amounts, say 1/100 of a cent, for a video view or article read. Why should I pay The Economist or National Geographic an annual subscription fee if I can pay per article on Facebook or my favorite chat app? Again, remember that blockchain transactions carry no transaction cost. You can charge for anything in any amount without worrying about third parties cutting into your profits.

Wikipedia’s digital backbone is similar to the highly protected and centralized databases that governments or banks or insurance companies keep today. Control of centralized databases rests with their owners, including the management of updates, access and protecting against cyber-threats.

The distributed database created by blockchain technology has a fundamentally different digital backbone. This is also the most distinct and important feature of blockchain technology.

Wikipedia’s ‘master copy’ is edited on a server and all users see the new version. In the case of a blockchain, every node in the network is coming to the same conclusion, each updating the record independently, with the most popular record becoming the de-facto official record in lieu of there being a master copy.

How Does Blockchain Work?

Picture a spreadsheet that is duplicated thousands of times across a network of computers. Then imagine that this network is designed to regularly update this spreadsheet and you have a basic understanding of the blockchain.

Information held on a blockchain exists as a shared — and continually reconciled — database. This is a way of using the network that has obvious benefits. The blockchain database isn’t stored in any single location, meaning the records it keeps are truly public and easily verifiable. No centralized version of this information exists for a hacker to corrupt. Hosted by millions of computers simultaneously, its data is accessible to anyone on the internet.

To go in deeper with the Google spreadsheet analogy, I would like you to read this piece from a blockchain specialist.

The reason why the blockchain has gained so much admiration is that:

  • It is not owned by a single entity, hence it is decentralized
  • The data is cryptographically stored inside
  • The blockchain is immutable, so no one can tamper with the data that is inside the blockchain
  • The blockchain is transparent so one can track the data if they want to

The Three Pillars of Blockchain Technology

The three main properties of Blockchain Technology which have helped it gain widespread acclaim are as follows:

  • Decentralization
  • Transparency
  • Immutability

Pillar #1: Decentralization

Before Bitcoin and BitTorrent came along, we were more used to centralized services. The idea is very simple: you have a centralized entity which stores all the data, and you have to interact solely with this entity to get whatever information you require.

Another example of a centralized system is banks. They store all your money, and the only way that you can pay someone is by going through the bank.

The traditional client-server model is a perfect example of this:

Client Server Architecture

When you google search for something, you send a query to the server, which then gets back to you with the relevant information. That is a simple client-server interaction.

Now, centralized systems have treated us well for many years, however, they have several vulnerabilities.

  • Firstly, because they are centralized, all the data is stored in one spot, which makes it an easy target for potential hackers.
  • If the centralized system goes through a software upgrade, it halts the entire system.
  • What if the centralized entity shuts down for whatever reason? Then nobody is able to access the information it possesses.
  • Worst-case scenario: what if this entity becomes corrupted and malicious? If that happens, all the data that it holds will be compromised.

So, what happens if we just take this centralized entity away?

In a decentralized system, the information is not stored by one single entity. In fact, everyone in the network owns the information.

In a decentralized network, if you want to interact with your friend, you can do so directly without going through a third party. That was the main ideology behind Bitcoin. You and you alone are in charge of your money. You can send your money to anyone you want without having to go through a bank.


Centralized and Decentralized Network Architecture

Pillar #2: Transparency

One of the most interesting and misunderstood concepts in blockchain technology is “transparency.” Some people say that blockchain gives you privacy while some say that it is transparent. Why do you think that happens?

Well… a person's identity is hidden via complex cryptography and represented only by their public address. So, if you were to look up a person's transaction history, you will not see "Bob sent 1 BTC"; instead you will see "1MF1bhsFLkBzzz9vpFYEmvwT2TbyCt7NZJ sent 1 BTC".

The following snapshot of Ethereum transactions will show you what we mean:

snapshot of Ethereum transactions

So, while the person's real identity is secure, you will still see all the transactions that were done by their public address. This level of transparency has never existed before within a financial system. It adds that extra, and much needed, level of accountability which is required by some of the biggest institutions.

Speaking purely from the point of view of cryptocurrency, if you know the public address of one of these big companies, you can simply pop it in an explorer and look at all the transactions that they have engaged in. This forces them to be honest, something that they have never had to deal with before.

However, that's not the best use case. We are pretty sure that most of these companies won't transact using cryptocurrencies, and even if they do, they won't do ALL their transactions using cryptocurrencies. However, what if blockchain technology were integrated into, say, their supply chain?

You can see why something like this can be very helpful for the finance industry, right?

Pillar #3: Immutability

Immutability, in the context of the blockchain, means that once something has been entered into the blockchain, it cannot be tampered with.

Can you imagine how valuable this will be for financial institutes?

Imagine how many embezzlement cases can be nipped in the bud if people know that they can’t “work the books” and fiddle around with company accounts.

The reason why the blockchain gets this property is the cryptographic hash function.

In simple terms, hashing means taking an input string of any length and giving out an output of a fixed length. In the context of cryptocurrencies like bitcoin, the transactions are taken as an input and run through a hashing algorithm (bitcoin uses SHA-256) which gives an output of a fixed length.

Let’s see how the hashing process works. We are going to put in certain inputs. For this exercise, we are going to use the SHA-256 (Secure Hashing Algorithm 256).

Secure Hashing Algorithm 256 Example Image

As you can see, in the case of SHA-256, no matter how big or small your input is, the output will always have a fixed 256-bit length. This becomes critical when you are dealing with a huge amount of data and transactions. So basically, instead of remembering the input data, which could be huge, you can just remember the hash and use it to keep track.
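A minimal sketch of this fixed-length property, using Node's built-in crypto module (the inputs are arbitrary examples of ours):

```ts
import { createHash } from 'node:crypto';

// SHA-256 always yields a 64-hex-character (256-bit) digest.
const sha256 = (input: string): string =>
  createHash('sha256').update(input).digest('hex');

for (const input of ['hi', 'a much, much longer input string than the first one']) {
  const digest = sha256(input);
  console.log(`${input.length}-char input -> ${digest.length * 4}-bit digest: ${digest}`);
}
```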

A cryptographic hash function is a special class of hash functions which has various properties making it ideal for cryptography. There are certain properties that a cryptographic hash function needs to have in order to be considered secure. You can read about those in detail in our guide on hashing.

There is just one property that we want you to focus on today. It is called the “Avalanche Effect.”

What does that mean?

Even if you make a small change in your input, the changes that will be reflected in the hash will be huge. Let’s test it out using SHA-256:

Avalanche Effect
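A sketch of the same experiment in code (again assuming Node's crypto module; the inputs are our own):

```ts
import { createHash } from 'node:crypto';

const sha256 = (input: string): string =>
  createHash('sha256').update(input).digest('hex');

// Flip only the case of the first letter and compare the two digests.
const h1 = sha256('This is a test');
const h2 = sha256('this is a test');
const differing = [...h1].filter((ch, i) => ch !== h2[i]).length;

console.log(h1);
console.log(h2);
console.log(`${differing} of 64 hex characters differ`); // typically ~60 of 64
```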

You see that? Even though you just changed the case of the first letter of the input, look at how much that has affected the output hash. Now, let's go back to our previous point when we were looking at blockchain architecture. What we said was:

The blockchain is a linked list which contains data and a hash pointer which points to its previous block, hence creating the chain. What is a hash pointer? A hash pointer is similar to a pointer, but instead of just containing the address of the previous block it also contains the hash of the data inside the previous block.

This one small tweak is what makes blockchains so amazingly reliable and trailblazing.

Imagine this for a second: a hacker attacks block 3 and tries to change the data. Because of the properties of hash functions, a slight change in the data will change the hash drastically. This means that any slight change made in block 3 will change the hash stored in block 2; that in turn will change the data and the hash of block 2, which will result in changes in block 1, and so on and so forth. The attacker would have to recompute the entire chain, on most of the network's computers at once, which is computationally infeasible. This is exactly how blockchains attain immutability.
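The following is a deliberately simplified sketch of this hash-pointer idea (our illustration; real blockchains add consensus, signatures, and Merkle trees on top of it):

```ts
import { createHash } from 'node:crypto';

interface Block {
  index: number;
  data: string;
  prevHash: string; // hash pointer to the previous block
  hash: string;     // hash of this block's own contents
}

const hashBlock = (index: number, data: string, prevHash: string): string =>
  createHash('sha256').update(`${index}|${data}|${prevHash}`).digest('hex');

function appendBlock(chain: Block[], data: string): Block {
  const index = chain.length;
  const prevHash = index === 0 ? '0'.repeat(64) : chain[index - 1].hash;
  const block: Block = { index, data, prevHash, hash: hashBlock(index, data, prevHash) };
  chain.push(block);
  return block;
}

// A chain is valid only if every block's stored prevHash matches a freshly
// recomputed hash of its predecessor.
function isValid(chain: Block[]): boolean {
  return chain.every((b, i) => {
    const expectedPrev = i === 0 ? '0'.repeat(64) : chain[i - 1].hash;
    return b.prevHash === expectedPrev && b.hash === hashBlock(b.index, b.data, b.prevHash);
  });
}

const chain: Block[] = [];
appendBlock(chain, 'genesis');
appendBlock(chain, 'Alice pays Bob 1 coin');
appendBlock(chain, 'Bob pays Carol 1 coin');

chain[1].data = 'Alice pays Mallory 100 coins'; // tamper with block 1
console.log(isValid(chain)); // false: the recomputed hashes no longer match
```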

Courtesy & Reference: What is Block Chain Technology
Every regular expression describes a regular language
Every regular expression describes a regular language. Let R be an arbitrary regular expression over the alphabet Σ. We will prove that the language described by R is a regular language. The proof is by induction on the structure of R.

The first base case of the induction: Assume that R = ε. Then R describes the language {ε}. In order to prove that this language is regular, it suffices, by the following theorem, to construct an NFA that accepts it:

Theorem 1:  Let A be a language. Then A is regular if and only if there exists a nondeterministic finite automaton that accepts A.

Thus, we construct the NFA M = (Q, Σ, δ, q, F) that accepts this language. This NFA is obtained by defining Q = {q}, letting q be the start state, F = {q}, and δ(q, a) = ∅ for all a ∈ Σε. The figure below gives the state diagram of M:

Show the start and final state of NFA

The second base case: Assume that R = ∅. Then R describes the language ∅. In order to prove that this language is regular, we again use Theorem 1, which states that a language is regular if and only if it is accepted by some NFA.

So, we construct the NFA M = (Q, Σ, δ, q, F) that accepts this language. This NFA is obtained by defining Q = {q}, letting q be the start state, F = ∅ (there is no accept state), and δ(q, a) = ∅ for all a ∈ Σε. The figure below gives the state diagram of M:

Start state of Non Deterministic Finite Automata

The third base case: Let a ∈ Σ and assume that R = a. Then R describes the language {a}. Again, by Theorem 1, it suffices to construct an NFA that accepts this language.

So, we construct the NFA M = (Q, Σ, δ, q₁, F) that accepts this language. This NFA is obtained by defining Q = {q₁, q₂}, letting q₁ be the start state, F = {q₂}, and

δ(q₁, a) = {q₂},

δ(q₁, b) = ∅ for all b ∈ Σε \ {a},

δ(q₂, b) = ∅ for all b ∈ Σε.

The figure below gives the state diagram of M:

NFA state diagram with input

The first case of the induction step: Assume that R = R1 ∪ R2, where R1 and R2 are regular expressions. Let L1 and L2 be the languages described by R1 and R2, respectively, and assume that L1 and L2 are regular. Then R describes the language L1 ∪ L2, which is regular by the following theorem:

Theorem 2: The set of regular languages is closed under the union operation, i.e., if A1 and A2 are regular languages over the same alphabet Σ, then A1 ∪ A2 is also a regular language.

The second case of the induction step: Assume that R = R1R2, where R1 and R2 are regular expressions. Let L1 and L2 be the languages described by R1 and R2, respectively, and assume that L1 and L2 are regular. Then R describes the language L1L2, which, by Theorem 3, is regular.

Theorem 3: The set of regular languages is closed under the concatenation operation, i.e., if A1 and A2  are regular languages over the same alphabet Σ , then A1A2 is also a regular language.

The third case of the induction step: Assume that R = (R1)*, where R1 is a regular expression. Let L1 be the language described by R1 and assume that L1 is regular. Then R describes the language (L1)*, which, by Theorem 4, is regular.

Theorem 4: The set of regular languages is closed under the star (Kleene) operation, i.e., if A is a regular language, then A* is  also a regular language.
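To make the closure constructions concrete, here is a minimal sketch of the union construction behind Theorem 2 (our own encoding; the state names and the `union` helper are illustrative, not the article's notation):

```ts
// Sketch: closure of regular languages under union via a new start state
// that has ε-transitions into both machines.
type State = string;

interface NFA {
  states: Set<State>;
  start: State;
  accept: Set<State>;
  delta: Map<string, Set<State>>; // key: `${state},${symbol}`; 'ε' = epsilon
}

// Assumes m1 and m2 have disjoint state names (rename beforehand if not).
function union(m1: NFA, m2: NFA): NFA {
  const start: State = 'q_new';
  const delta = new Map<string, Set<State>>([...m1.delta, ...m2.delta]);
  delta.set(`${start},ε`, new Set([m1.start, m2.start]));
  return {
    states: new Set([start, ...m1.states, ...m2.states]),
    start,
    accept: new Set([...m1.accept, ...m2.accept]),
    delta,
  };
}
```

The ε-transitions out of the new start state let the machine nondeterministically "choose" which of the two NFAs to run, so the combined machine accepts exactly L1 ∪ L2.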

This concludes the proof of the claim that every regular expression describes a regular language.

Read: Regular Language in Automata Theory

Definition of a Turing Machine

We start with an informal description of a Turing Machine. Such a machine consists of the following:

  1. There are k tapes, for some fixed k ≥ 1. Each tape is divided into cells and is infinite both to the left and to the right. Each cell stores a symbol belonging to a finite set Γ, which is called the tape alphabet. The tape alphabet contains the blank symbol Δ. If a cell contains Δ, then this means that the cell is actually empty.
    A Turing machine with k = 2 tapes
  2. Each tape has a tape head which can move along the tape, one cell per move. It can also read the cell it currently scans and replace the symbol in this cell by another symbol.
  3. There is the state control, which can be in any one of a finite number of states. The finite set of states is denoted by Q. The set Q contains three special states: a start state, an accept state, and a reject state.

The Turing machine performs a sequence of computation steps. In one such steps, it does the following:

  1. Immediately before the computation step, the Turing machine is in a state r of Q, and each of the k tape heads is on a certain cell.
  2. Depending on the current state r and the k symbols that are read by the tape heads, 
    1. the Turing machine switches to a state r’ of Q (which may be equal to r)
    2. each tape head writes a symbol of Γ in the cell it is currently scanning (this symbol may be equal to the symbol currently stored in the cell), and
    3. each tape head either moves one cell to the left, moves one cell to the right, or stays at the current cell.

We now give a formal definition of a deterministic Turing machine.

Definition: A deterministic Turing machine is a 7-tuple

M = (Σ, Γ, Q, δ, q, qaccept, qreject),

where

  1. Σ is a finite set, called the input alphabet; the blank symbol Δ is not contained in Σ, 
  2. Γ is a finite set, called the tape alphabet; this alphabet contains the blank symbol Δ, and Σ ⊆ Γ,
  3. Q is a finite set, whose elements are called states, 
  4. q is an element of Q; it is called the start state,
  5. qaccept is an element of Q; it is called the accept state,
  6. qreject is an element of Q; it is called the reject state,
  7. δ is called the transition function, which is a function

δ: Q × Γᵏ → Q × Γᵏ × {L, R, N}ᵏ.

The transition function δ is basically the "program" of the Turing machine. This function tells us what the machine can do in "one computation step": Let r ∈ Q, and let a₁, a₂, …, aₖ ∈ Γ. Furthermore, let r′ ∈ Q, a′₁, a′₂, …, a′ₖ ∈ Γ, and σ₁, σ₂, …, σₖ ∈ {L, R, N} be such that

δ(r, a₁, a₂, …, aₖ) = (r′, a′₁, a′₂, …, a′ₖ, σ₁, σ₂, …, σₖ).

This transition means that if

  • the Turing machine is in state r, and
  • the head of the i-th tape reads the symbol aᵢ, 1 ≤ i ≤ k,

then

  • the Turing machine switches to state r′,
  • the head of the i-th tape replaces the scanned symbol aᵢ by the symbol a′ᵢ, 1 ≤ i ≤ k, and
  • the head of the i-th tape moves according to σᵢ, 1 ≤ i ≤ k: if σᵢ = L, then the tape head moves one cell to the left; if σᵢ = R, then it moves one cell to the right; if σᵢ = N, then it does not move.

We will write the computation step in the form of the instruction

r a₁ a₂ … aₖ → r′ a′₁ a′₂ … a′ₖ σ₁ σ₂ … σₖ
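As an illustration of how such an instruction executes, here is a minimal single-tape (k = 1) step function. This is our own sketch: the encoding of δ as a map and all the names here are assumptions for illustration, not part of the formal definition:

```ts
// Minimal single-tape (k = 1) Turing machine step; the tape is modeled as a
// sparse map from cell index to symbol, with absent cells holding the blank Δ.
type Move = 'L' | 'R' | 'N';
const BLANK = 'Δ';

interface TuringMachine {
  delta: Map<string, [string, string, Move]>; // key: `${state}|${symbol}`
  accept: string;
  reject: string;
}

interface Config {
  state: string;
  head: number;              // index of the scanned cell
  tape: Map<number, string>; // cells not present hold the blank symbol Δ
}

// One computation step: read, look up δ, write, switch state, move the head.
function step(tm: TuringMachine, c: Config): Config {
  const scanned = c.tape.get(c.head) ?? BLANK;
  const rule = tm.delta.get(`${c.state}|${scanned}`);
  if (rule === undefined) return { ...c, state: tm.reject }; // no rule: reject
  const [nextState, write, move] = rule;
  const tape = new Map(c.tape).set(c.head, write);
  const head = move === 'L' ? c.head - 1 : move === 'R' ? c.head + 1 : c.head;
  return { state: nextState, head, tape };
}
```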

We now specify the computation of the Turing Machine

M = (Σ, Γ, Q, δ, q, qaccept, qreject).

What is Edge Computing?

Edge computing is the practice of processing data near the edge of your network, where the data is being generated, instead of in a centralized data-processing warehouse.

Edge computing definition

Edge computing is a distributed, open IT architecture that features decentralised processing power, enabling mobile computing and Internet of Things (IoT) technologies. In edge computing, data is processed by the device itself or by a local computer or server, rather than being transmitted to a data centre.

Edge computing is a “mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet,” according to research firm IDC.

It is typically referred to in IoT use cases, where edge devices would collect data – sometimes massive amounts of it – and send it all to a data center or cloud for processing. Edge computing triages the data locally so some of it is processed locally, reducing the backhaul traffic to the central repository.

Typically, this is done by the IoT devices transferring the data to a local device that includes compute, storage and network connectivity in a small form factor. Data is processed at the edge, and all or a portion of it is sent to the central processing or storage repository in a corporate data center, co-location facility or IaaS cloud.


Why does edge computing matter?

Edge computing deployments are ideal in a variety of circumstances. One is when IoT devices have poor connectivity and it's not efficient for them to be constantly connected to a central cloud.

Other use cases have to do with latency-sensitive processing of information. Edge computing reduces latency because data does not have to traverse over a network to a data center or cloud for processing. This is ideal for situations where latencies of milliseconds can be untenable, such as in financial services or manufacturing.

Here’s an example of an edge computing deployment: An oil rig in the ocean that has thousands of sensors producing large amounts of data, most of which could be inconsequential; perhaps it is data that confirms systems are working properly.

That data doesn't necessarily need to be sent over a network as soon as it's produced, so instead the local edge computing system compiles the data and sends daily reports to a central data center or cloud for long-term storage. By only sending important data over the network, the edge computing system reduces the data traversing the network.

Another use case for edge computing has been the buildout of next-gen 5G cellular networks by telecommunication companies. Kelly Quinn, research manager at IDC who studies edge computing, predicts that as telecom providers build 5G into their wireless networks they will increasingly add micro-data centers that are either integrated into or located adjacent to 5G towers. Business customers would be able to own or rent space in these micro-data centers to do edge computing, then have direct access to a gateway into the telecom provider’s broader network, which could connect to a public IaaS cloud provider.

Edge computing security

There are two sides to the edge computing security coin. Some argue that security is theoretically better in an edge computing environment because data is not traveling over a network and stays closer to where it was created. The less data in a corporate data center or cloud environment, the less data there is to be vulnerable if one of those environments is compromised.

The flip side of that is that some believe edge computing is inherently less secure because the edge devices themselves can be more vulnerable. In designing any edge or fog computing deployment, therefore, security must be paramount. Data encryption, access control and use of virtual private network tunneling are important elements in protecting edge computing systems.

Edge computing terms and definitions

Like most technology areas, edge computing has its own lexicon. Here are brief definitions of some of the more commonly used terms:

  • Edge devices: These can be any device that produces data. These could be sensors, industrial machines or other devices that produce or collect data.
  • Edge: What the edge is depends on the use case. In a telecommunications field, perhaps the edge is a cell phone or maybe it’s a cell tower. In an automotive scenario, the edge of the network could be a car. In manufacturing, it could be a machine on a shop floor; in enterprise IT, the edge could be a laptop.
  • Edge gateway: A gateway is the buffer between where edge computing processing is done and the broader fog network. The gateway is the window into the larger environment beyond the edge of the network.
  • Fat client: Software that can do some data processing in edge devices. This is opposed to a thin client, which would merely transfer data.
  • Edge computing equipment: Edge computing uses a range of existing and new equipment. Many devices, sensors and machines can be outfitted to work in an edge computing environment by simply making them Internet-accessible. Cisco and other hardware vendors have a line of ruggedized network equipment that has hardened exteriors meant to be used in field environments. A range of compute servers, converged systems and even storage-based hardware systems like Amazon Web Services' Snowball can be used in edge computing deployments.
  • Mobile edge computing: This refers to the buildout of edge computing systems in telecommunications systems, particularly 5G scenarios.

Edge vs. Fog computing

As the edge computing market takes shape, there’s an important term related to edge that is catching on: fog computing.

Fog refers to the network connections between edge devices and the cloud. Edge, on the other hand, refers more specifically to the computational processes being done close to the edge devices. So, fog includes edge computing, but fog would also incorporate the network needed to get processed data to its final destination.

Backers of the OpenFog Consortium, an organization headed by Cisco, Intel, Microsoft, Dell EMC and academic institutions like Princeton and Purdue universities, are developing reference architectures for fog and edge computing deployments.

Some have predicted that edge computing could displace the cloud. But Mung Chiang, dean of Purdue University's School of Engineering and co-chair of the OpenFog Consortium, believes that no single computing domain will dominate; rather, there will be a continuum. Edge and fog computing are useful when real-time analysis of field data is required.

Proof by Induction – Mathematical Preliminaries Part 4
A proof by induction is a powerful and important technique for proving theorems, in which every step must be justified.


For each positive integer n, let P(n) be a mathematical statement that depends on n. Assume we wish to prove that P(n) is true for all positive integers n. A proof by induction of such a statement is carried out as follows:

Basis: Prove that P(1) is true.

Induction Step: Prove that for all n ≥ 1, the following holds: If P(n) is true, then P(n+1) is also true.

In the induction step, we choose an arbitrary integer n ≥ 1 and assume that P(n) is true; this is called the induction hypothesis. Then we prove that P(n+1) is also true.

Theorem 1: For all positive integers n, we have 1 + 2 + 3 + … + n = n(n + 1)/2.

Proof: We start with the basis of the induction. If n = 1, then the left-hand side is equal to 1, and so is the right-hand side. So the theorem is true for n = 1.

For the induction step, let n ≥ 1 and assume that the theorem is true for n, i.e., assume that 1 + 2 + 3 + … + n = n(n + 1)/2.

The induction step requires us to prove that the statement also holds for n + 1, i.e., that

1 + 2 + 3 + … + (n + 1) = (n + 1)((n + 1) + 1)/2,

which simplifies to

1 + 2 + 3 + … + (n + 1) = (n + 1)(n + 2)/2.

Proving this will prove the theorem.

Now take the L.H.S.:

1 + 2 + 3 + … + (n + 1) = 1 + 2 + 3 + … + n + (n + 1)    (the term n + 1 comes right after n)

By the induction hypothesis, 1 + 2 + 3 + … + n = n(n + 1)/2, so this equals

n(n + 1)/2 + (n + 1)

= (n² + n + 2n + 2)/2

= (n(n + 1) + 2(n + 1))/2    (grouping the terms)

= (n + 1)(n + 2)/2 = R.H.S.

Also Read: Pigeon Hole Principle Mathematical Preliminaries Part 3


Mathematical Statement

To understand what a mathematical statement is, first recall what mathematics basically is. When we solve a problem in maths, our solution is either right or wrong; there is no midway! The situation is similar with any mathematical statement: a mathematical statement is either true or false.

Mathematical Statement Definition:

A statement (or proposition) is a sentence that is either true or false (but not both).

So ‘3 is an odd integer’ is a statement. But ‘π is a cool number’ is not a (mathematical) statement. Note that ‘4 is an odd integer’ is also a statement, but it is a false statement.

Any sentence that can be judged true by some and false by others cannot be a mathematical statement. To understand this, consider three sentences:

  • The first prime minister of the United States was a woman.
  • The Blue Whale is the largest animal on Earth.
  • Girls are more intelligent than boys.

The first statement is false while the second is true, but the third is true for some and false for others; it has no definite truth value. So a statement which is definitely either true or false is called a mathematical statement.

Every statement that is either true or false is said to be a mathematically accepted one, hence is called a mathematical statement.

Mathematical Statement In Discrete Mathematics

A meaningful composition of words which can be considered either true or false is called a mathematical statement or simply a statement.

A single letter shall be used to denote a statement. For example, the letter ‘p’ may be used to stand for the statement “ABC is an equilateral triangle.” Thus, p = ABC is an equilateral triangle.

Production of New Statement

New statements from given statements can be produced by:

  1. Negation: ∼
    If p is a statement then its negation '∼p' is the statement 'not p'. '∼p' has truth value F or T according as the truth value of 'p' is T or F.
  2. Implication: ⇒
    If from a statement p another statement q follows, we say ‘p implies q’ and write ‘p⇒ q’. Such a result is called an implication. The truth value of ‘p ⇒ q’ is F only when p has truth value T and q has the truth value F.
    The statements involving 'if p holds then q' are of the kind p ⇒ q. For example, x = 2 ⇒ x² = 4.
  3. Conjunction: ∧
    The sentence ‘p and q’ which may be denoted by ‘p ∧ q’ is the conjunction of p and q. The truth value of p ∧ q is T only when both p and q are true.
  4. Disjunction:  ∨
    The sentence ‘p or q (or both)’ which may be denoted by ‘p ∨ q’ is called the disjunction of the statements p and q. The truth value of p ∨ q is F only when both p and q are false.

Equivalence of Two Statements, p⇔q

Two statements p and q are said to be equivalent if one implies the other, and in such a case we use the double implication symbol ⇔ and write p ⇔ q.

The statements which involve the phrase 'if and only if' or 'is equivalent to' or 'the necessary and sufficient conditions' are of the kind p ⇔ q. For example, ABC is an equilateral triangle ⇔ AB = BC = CA.

For brevity, the phrase ‘if and only if’ is shortened to “iff”. As described above, the symbols ∧ and ∨  stand for the words ‘and’ and ‘or’ respectively. The disjunction symbol ∨ is used in the logical sense ‘or’. The symbols ∧, ∨ are logical connectives and are frequently used.

The following is the table showing truth values of different compositions of statements. Such tables are called truth tables.

p | q | ∼p | ∼q | p ⇒ q | p ∧ q | p ∨ q | p ⇔ q
--|---|----|----|-------|-------|-------|-------
T | T | F  | F  | T     | T     | T     | T
T | F | F  | T  | F     | F     | T     | F
F | T | T  | F  | T     | F     | T     | F
F | F | T  | T  | T     | F     | F     | T

By forming truth tables, the equivalence of various statements can easily be ascertained. For example, we shall easily see that the implication 'p ⇒ q' is equivalent to '∼q ⇒ ∼p'. The implication '∼q ⇒ ∼p' is called the contrapositive of p ⇒ q.
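As a quick check, this small script of ours regenerates the table above ('⇒' is encoded as !p || q and '⇔' as p === q):

```ts
// Regenerate the truth table for ∼, ⇒, ∧, ∨ and ⇔ over all four (p, q) pairs.
const values = [true, false];
const show = (b: boolean): string => (b ? 'T' : 'F');

console.log('p | q | ∼p | ∼q | p ⇒ q | p ∧ q | p ∨ q | p ⇔ q');
for (const p of values) {
  for (const q of values) {
    const row = [p, q, !p, !q, !p || q, p && q, p || q, p === q];
    console.log(row.map(show).join(' | '));
  }
}
```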

Read Also: Pigeon Hole Principle Mathematical Preliminaries Part 3

Example:

Question: Consider the statement: "Given that people who are in need of refuge and consolation are apt to do odd things, it is clear that people who are apt to do odd things are in need of refuge and consolation." This statement, of the form (P ⇒ Q) ⇒ (Q ⇒ P), is logically equivalent to people

  1. who are in need of refuge and consolation are not apt to do odd things
  2. that are apt to do odd things if and only if they are in need of refuge and consolation
  3. who are apt to do odd things are in need of refuge and consolation
  4. who are in need of refuge and consolation are apt to do odd things

Solution: Option 3: people who are apt to do odd things are in need of refuge and consolation. The given statement is "people who are in need of refuge and consolation are apt to do odd things". It is of the form p ⇒ q, where p is "in need of refuge and consolation" and q is "apt to do odd things".

So q ⇒ p is equivalent to "people who are apt to do odd things are in need of refuge and consolation". Therefore option 3 is correct.

Pigeon Hole Principle

If n + 1 or more objects are placed into n boxes, then there is at least one box containing two or more objects. In other words, if A and B are two sets such that |A| > |B|, then there is no one-to-one function from A to B.

Theorem 1: Let n be a positive integer. Every sequence of n² + 1 distinct real numbers contains a subsequence of length n + 1 that is either increasing or decreasing.

Proof. For example, consider the sequence (20, 10, 9, 7, 11, 2, 21, 1, 20, 31) of 10 = 3² + 1 numbers. This sequence contains an increasing subsequence of length 4 = 3 + 1, namely (10, 11, 21, 31).

The proof of this theorem is by contradiction, and uses the pigeon hole principle.

Let (a₁, a₂, …, a_{n²+1}) be an arbitrary sequence of n² + 1 distinct real numbers. For each i with 1 ≤ i ≤ n² + 1, let incᵢ denote the length of the longest increasing subsequence that starts at aᵢ, and let decᵢ denote the length of the longest decreasing subsequence that starts at aᵢ.
Using this notation, the claim in the theorem can be formulated as follows:

There is an index i such that incᵢ ≥ n + 1 or decᵢ ≥ n + 1.

We will prove the claim by contradiction. So we assume that incᵢ ≤ n and decᵢ ≤ n for all i with 1 ≤ i ≤ n² + 1.
Consider the set

B = {(b, c) : 1 ≤ b ≤ n, 1 ≤ c ≤ n},

and think of the elements of B as being boxes. For each i with 1 ≤ i ≤ n² + 1, the pair (incᵢ, decᵢ) is an element of B. So we have n² + 1 pairs (incᵢ, decᵢ), which are placed in the n² boxes of B. By the pigeonhole principle, there must be a box that contains two (or more) of these pairs. In other words, there exist two indices i and j with i < j such that

(incᵢ, decᵢ) = (incⱼ, decⱼ).
Recall that the elements in the sequence are distinct. Hence, aᵢ ≠ aⱼ. We consider two cases.
First assume that aᵢ < aⱼ. Then the length of the longest increasing subsequence starting at aᵢ must be at least 1 + incⱼ, because we can prepend aᵢ to the longest increasing subsequence starting at aⱼ. Therefore, incᵢ ≠ incⱼ, which is a contradiction.
The second case is when aᵢ > aⱼ. Then the length of the longest decreasing subsequence starting at aᵢ must be at least 1 + decⱼ, because we can prepend aᵢ to the longest decreasing subsequence starting at aⱼ. Therefore, decᵢ ≠ decⱼ, which is again a contradiction.
