Tuesday, July 29, 2008

Network telescope


A network telescope (also known as a darknet, Internet motion sensor, or black hole) is an Internet system that allows one to observe large-scale events taking place on the Internet. The basic idea is to observe traffic targeting the dark (unused) address space of the network. Since all traffic to these addresses is suspicious, observing it can reveal possible network attacks (random scanning worms and DDoS backscatter) as well as misconfigurations.
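As a concrete sketch of the basic idea, the Python snippet below (assuming the third-party scapy library and packet-capture privileges; the dark prefix 192.0.2.0/24 is a hypothetical stand-in for a routed-but-unused range) logs every packet addressed to the dark space:

# Minimal network-telescope sketch: passively capture all traffic sent to a
# dark (unused) prefix. Assumes scapy is installed and that 192.0.2.0/24 is
# a routed-but-unused range on this network -- a hypothetical choice here.
from scapy.all import sniff

DARK_PREFIX = "192.0.2.0/24"  # hypothetical dark address space

def log_packet(pkt):
    # Every packet seen here is unsolicited: nothing legitimate lives in
    # the dark range, so each hit hints at scanning, backscatter, or error.
    print(pkt.summary())

# The BPF filter restricts capture to traffic destined for the dark prefix;
# store=0 avoids keeping packets in memory. Typically requires root.
sniff(filter=f"dst net {DARK_PREFIX}", prn=log_packet, store=0)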

The resolution of a network telescope depends on the number of dark addresses it monitors. For example, a large network telescope that monitors traffic to 16,777,216 addresses (a /8 telescope in IPv4) has a higher probability of observing a relatively small event than a smaller telescope that monitors 65,536 addresses (a /16 telescope).
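To put numbers on that, a quick back-of-the-envelope calculation in Python: a worm probing uniformly random IPv4 addresses hits the telescope with probability (monitored addresses) / 2^32 per probe, so a /8 telescope sees roughly one probe in 256 while a /16 sees roughly one in 65,536:

# Back-of-the-envelope resolution of a network telescope: the chance that a
# single uniformly random IPv4 probe lands inside the monitored dark space.
TOTAL_IPV4 = 2**32

for prefix_len, name in [(8, "/8 telescope"), (16, "/16 telescope")]:
    monitored = 2 ** (32 - prefix_len)      # addresses in the dark prefix
    p_hit = monitored / TOTAL_IPV4          # per-probe hit probability
    # Expected probes observed from a worm sending 1,000,000 random probes.
    expected = 1_000_000 * p_hit
    print(f"{name}: {monitored} addresses, "
          f"p(hit) = 1/{TOTAL_IPV4 // monitored}, "
          f"~{expected:.0f} of 1e6 probes seen")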

A variant of a network telescope is a sparse darknet, or greynet, consisting of a region of IP address space that is sparsely populated with 'darknet' addresses interspersed with active (or 'lit') IP addresses.

Monday, July 21, 2008

Object-relational database


An object-relational database (ORD) or object-relational database management system (ORDBMS) is a database management system (DBMS) similar to a relational database, but with an object-oriented database model: objects, classes, and inheritance are directly supported in database schemas and in the query language. In addition, it supports extension of the data model with custom data types and methods.

One aim for this type of system is to bridge the gap between conceptual data modeling techniques such as entity-relationship diagrams (ERD) and object-relational mapping (ORM), which often use classes and inheritance, and relational databases, which do not directly support them.

Another, related aim is to bridge the gap between relational databases and the object-oriented modeling techniques used in programming languages such as Java, C++, or C#. However, a more popular alternative for achieving such a bridge is to use a standard relational database system with some form of ORM software.
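As an illustration, the sketch below shows the kind of schema an ORDBMS such as PostgreSQL supports: a user-defined composite type plus table inheritance, issued here from Python through the psycopg2 driver (the connection parameters and table names are invented for the example):

# Sketch of object-relational features (custom type + table inheritance),
# using PostgreSQL syntax via the psycopg2 driver. Connection details and
# schema names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=example user=example")  # assumed database
cur = conn.cursor()

# A user-defined composite type extends the built-in data model.
cur.execute("""
    CREATE TYPE address AS (
        street  text,
        city    text,
        zipcode text
    )
""")

# Inheritance directly in the schema: capitals ARE cities, with one extra
# column, and a query against cities also returns the rows of capitals.
cur.execute("CREATE TABLE cities (name text, population int, hq address)")
cur.execute("CREATE TABLE capitals (country text) INHERITS (cities)")

conn.commit()
cur.close()
conn.close()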

Monday, July 14, 2008

Software transactional memory


In computer science, software transactional memory (STM) is a concurrency control mechanism, analogous to database transactions, for controlling access to shared memory in concurrent computing. It functions as an alternative to lock-based synchronization, and is typically implemented in a lock-free way. A transaction in this context is a piece of code that executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions.

The idea of providing hardware support for transactions originated in a 1986 paper and patent by Tom Knight. It was popularized by Maurice Herlihy and J. Eliot B. Moss. In 1995, Nir Shavit and Dan Touitou extended the idea to software-only transactional memory (STM). STM has recently been the focus of intense research, and support for practical implementations is growing.
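To make the transaction semantics concrete, here is a deliberately tiny STM sketch in Python (a toy with one global commit lock, not any production design; all names are invented for the example). Transactions buffer their writes, validate at commit time that nothing they read has changed, and automatically re-run on conflict:

# Toy software transactional memory: versioned variables, buffered writes,
# commit-time validation, and automatic retry on conflict. A sketch only;
# real STMs use far more sophisticated schemes.
import threading

_commit_lock = threading.Lock()  # serializes commits (coarse, for clarity)

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    def __init__(self):
        self.reads = {}    # TVar -> version observed at first read
        self.writes = {}   # TVar -> new value (buffered, not yet visible)

    def read(self, tvar):
        if tvar in self.writes:          # read-your-own-writes
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value        # invisible until commit

    def commit(self):
        with _commit_lock:
            # Validate: every TVar we read must be unchanged since we read it.
            if any(tvar.version != v for tvar, v in self.reads.items()):
                return False             # conflict -> caller retries
            for tvar, value in self.writes.items():
                tvar.value = value
                tvar.version += 1        # all writes land "at one instant"
            return True

def atomically(fn):
    """Run fn(tx) until it commits; fn must be free of side effects."""
    while True:
        tx = Transaction()
        result = fn(tx)
        if tx.commit():
            return result

# Example: a race-free transfer between two accounts.
a, b = TVar(100), TVar(0)
def transfer(tx):
    amount = 10
    tx.write(a, tx.read(a) - amount)
    tx.write(b, tx.read(b) + amount)
atomically(transfer)
print(a.value, b.value)  # 90 10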

Tuesday, July 08, 2008

Optimization


Optimization or optimality is a term that may refer to:

* Optimization (mathematics), trying to find the maxima and minima of a function (a minimal numerical sketch follows this list)
* Optimization (computer science), improving a system to reduce runtime, bandwidth, memory requirements, or some other property of the system; in particular:
  * Compiler optimization, improving the performance or efficiency of compiled code
* Search engine optimization, in internet marketing, methodologies aimed at improving the ranking of a website in search engine listings
* Process optimization, in business and engineering, methodologies for improving the efficiency of a production process
* Product optimization, in business and marketing, methodologies for improving the quality and desirability of the current product or a product concept
* Optimality theory, in linguistics
* Optimal classification, a process that arranges classification element attributes in an order that minimizes the number of queries necessary to identify any particular element
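As promised above, a minimal sketch of mathematical optimization in Python: golden-section search for the minimum of a unimodal one-variable function (the function and interval are arbitrary illustrative choices):

# Minimal mathematical-optimization sketch: golden-section search for the
# minimum of a unimodal function on an interval. Function and bounds are
# arbitrary illustrative choices.
import math

def golden_section_min(f, lo, hi, tol=1e-8):
    inv_phi = (math.sqrt(5) - 1) / 2      # 1/phi, about 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                   # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                             # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Example: minimize (x - 2)^2 + 1 on [0, 5]; the minimum is at x = 2.
x_star = golden_section_min(lambda x: (x - 2) ** 2 + 1, 0.0, 5.0)
print(round(x_star, 6))  # ~2.0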

Wednesday, July 02, 2008

Blacklisting


Attempts to stop spam by blacklisting senders' IP addresses still allow a small percentage through. Most IP addresses are dynamic, i.e. they change frequently. An ISP, or any organization directly connected to the Internet, is allocated a block of IP addresses and assigns individual addresses within that block to customers as needed. A dial-up customer may get a new IP address each time they connect. By the time an address appears on blacklists all over the world, the spammer has already moved on to new addresses for the next run. There are roughly four billion possible IPv4 addresses on the Internet. The game of keeping up with these rapidly changing IP addresses has been facetiously called "whack-a-mole".
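In practice, such blacklists are usually published as DNS-based blackhole lists (DNSBLs): to test an address, reverse its octets, append the list's zone, and perform an ordinary DNS lookup; any answer means "listed". A minimal check in Python (dnsbl.example.org is a placeholder zone, not a real list):

# Minimal DNSBL check: reverse the IPv4 octets, prepend to the list's zone,
# and look it up; NXDOMAIN means "not listed", any A record means "listed".
# The zone below is a placeholder -- substitute a real DNSBL zone.
import socket

DNSBL_ZONE = "dnsbl.example.org"  # hypothetical blacklist zone

def is_listed(ip):
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{reversed_ip}.{DNSBL_ZONE}"   # e.g. 4.3.2.1.dnsbl.example.org
    try:
        socket.gethostbyname(query)         # any answer -> listed
        return True
    except socket.gaierror:                 # NXDOMAIN -> not listed
        return False

print(is_listed("192.0.2.1"))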

So-called policy lists are blacklists that contain IP addresses on a preventive basis. An IP address can be listed even if no spam has ever been sent from it, because it has been classified as a dial-up address, end-user address, or residential address, with no formal definition of such classification schemes. Because they do not require evidence of spam for each listed address, these lists can collect a greater number of addresses and thus block more spam. However, the policies devised are not authoritative, since they have not been issued by the legitimate user of an IP address, and the resulting lists are therefore not universally accepted.