
Biometric Fingerprints to Replace Passwords

A UK company has developed a new biometric application that is poised to replace password-based login access on computer networks.

As reported by Techworld and quoted by detikINET on Monday (5/3/2007), the patented system is called MatchLogon and was designed as a development of conventional biometric systems.

In addition to using a unique fingerprint to log into a network or PC (personal computer), the system is also secured by a randomized sequence of commands known only to the authorized user. An attacker trying to slip into the system would therefore need not only four of the user's fingerprints but also the correct sequence of commands.
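
The article does not say how MatchLogon is implemented, but the general idea of combining several fingerprint matches with a secret presentation order can be sketched roughly as below. All names, the similarity threshold and the matcher itself are illustrative assumptions, not details of the product.

    # Rough sketch of the two-factor idea described above: the user must present
    # several enrolled fingerprints AND present them in a secret order.
    # match_score, ENROLLED and SECRET_ORDER are invented for this illustration.

    MATCH_THRESHOLD = 0.9  # assumed similarity score required per finger

    def match_score(sample, template):
        """Placeholder for a real fingerprint matcher returning 0.0..1.0."""
        return 1.0 if sample == template else 0.0

    def authenticate(samples, enrolled_templates, secret_order):
        """samples: list of (finger_id, scan) pairs in the order presented."""
        # Factor 1: every presented scan must match its enrolled template.
        for finger_id, scan in samples:
            if match_score(scan, enrolled_templates[finger_id]) < MATCH_THRESHOLD:
                return False
        # Factor 2: the fingers must be presented in the user's secret sequence.
        presented_order = [finger_id for finger_id, _ in samples]
        return presented_order == secret_order

    # Example: four fingers enrolled, secret order is index-middle-thumb-ring.
    ENROLLED = {"thumb": "T", "index": "I", "middle": "M", "ring": "R"}
    SECRET_ORDER = ["index", "middle", "thumb", "ring"]
    print(authenticate([("index", "I"), ("middle", "M"), ("thumb", "T"), ("ring", "R")],
                       ENROLLED, SECRET_ORDER))   # True
    print(authenticate([("thumb", "T"), ("index", "I"), ("middle", "M"), ("ring", "R")],
                       ENROLLED, SECRET_ORDER))   # False: right fingers, wrong order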

The company behind it believes the technology will offer greater peace of mind, especially for users in the financial sector and in information centres that require a high level of security management, because the chance of successfully breaking into the device is put at 1 in 10 million.

The technology is presented as a new alternative to password-based login security, which is felt to be no longer safe enough.

During 2006, MatchLogon was trialled at several companies in the UK and across Europe, and FingerPIN, its maker, will soon begin negotiating with systems integrators to help bring the security system to market.

"Kami yakin (teknologi) ini kuat dan sangat tepat bagi user. Jaringan dan aplikasi tidak lagi membutuhkan login dengan mengingat password Anda," ujar Martine Laffan dari FingerPIN.

Meanwhile, to deploy the technology a company has to spend 1,000 pounds sterling (1 pound = Rp 17,720, source: xe.com) on the server, plus 49.99 pounds per client.


The Black Hole Between Humans and Computers

The Brain: Put to Use or Not?


The organ called the "BRAIN" is something no computer in the world can match, however sophisticated its memory capacity and speed. Yes, there are computers, or calculating machines of a kind, that work faster than some human brains of a given era. But it is this "BRAIN" that pours its ideas into the electronic devices humans have named computers, to the point where those devices can perform calculations or other tasks beyond what some human brains of a particular time and place can do.

So although the "BRAIN" is physically small, it is the part that shapes a person's character and behaviour, and it is one of the factors in how a person builds and maintains a loving relationship with God during life in this world (compare Matthew 22:37, "Love the Lord your God with all your heart and with all your soul and with all your mind").

Remember that every human being is endowed with a brain that develops uniquely, even in babies born as identical twins. Their genetic sameness is modified by variations in each one's brain and nervous system. From the embryonic stage through to the newborn, cells multiply, migrate and colonise to build the structure of the brain and the central nervous system. What is striking in this process is the presence of a molecule that guides the cells to form the right organ.

As the brain forms, the cells within it move and migrate. Meanwhile the cells of the central nervous system make a long journey, travelling in groups toward their destinations, ensuring that no two brains turn out the same. The building block of the nervous system within the brain is the NEURON (nerve cell), produced by the brain itself. In the third week of pregnancy, for example, the brain can produce 250,000 neurons every minute. It is these neurons that allow people to carry out their activities, from thinking and speaking through to, say, playing sport.

The neurons themselves are organised into distinct functions. Three specialised groups of neurons govern how they work: a) actions that relate to the outside world (such as seeing a view, listening to music, feeling the heat of the sun, touching a book or tasting food); b) matters related to muscles, movement and so on; c) abilities such as thinking, remembering, imagining and other faculties. Prof. Susan Greenfield, a neuroscientist at Oxford, says that a single neuron can send chemical transmissions 500 times in one second, each transmission with a different function.

What matters is that this "auto-dialing" of information must be regular and constant in order to build the nervous system described above; disruptions such as drug addiction, viral infection and malnutrition can damage the formation process. At birth a baby has about one billion brain cells, which then develop extraordinarily into trillions of new connections. By the age of two, a child's brain has twice as many synapses, or nerve junctions, as an adult brain, and it uses twice as much energy.

From as early as the womb, the experiences of the foetus influence which connections are formed. This period of vigorous growth comes to a halt at around the age of ten. A few years later the brain begins to adjust as damaged synapses, or nerve junctions, are pruned. This adjustment reduces the brain's plasticity but increases its strength. That is why, generally from about the age of eighteen onwards, a person's personality begins to take shape (the result of the complex interplay between genes and personal experience).

As a person grows older the brain shrinks, and this becomes increasingly apparent in old age. For example, from the age of 70 brain volume declines by about 10% for every 10 years, so by the time a person enters their 80s their brain volume has already fallen by 20%.

This shrinkage of brain volume is nothing to worry about, because it does not mean a person's abilities decline or that they become less intelligent. What determines whether the brain functions optimally is the connections between its networks, the interconnections between the neurons themselves.

That is why, to keep those connections in good shape, the brain must be used and kept active at all times. That is also why the grandmother I met a while ago must still be engaged in conversation, thinking and reading, so that the networks in her brain stay "alive". Perhaps in the past she was often drawn into conversation and storytelling by her children and grandchildren, but lately few people have talked with her, so her brain has been underused.

I am amazed, astonished and full of gratitude at the whole process by which the human brain is formed. The brain given by the Creator has power beyond any computer, however advanced. It should therefore be used as well as possible, including in responding to His love for humanity. If it is not used, humans lose the "distinctiveness" that makes them the crown of creation compared with other creatures (such as the universe and the animals). This also reminds me of His power in creating humanity, as the Psalmist testifies in Psalm 139:13-14:

"For you formed my inward parts; you wove me together in my mother's womb. I give thanks to you, for I am fearfully and wonderfully made; wonderful are your works, and my soul knows it full well."


Mechanisms of A.I.

Expert systems were one of the earliest types of AI system. They are built around automated inference engines supporting forward reasoning and backward reasoning. Based on certain conditions ("if") the system infers certain consequences ("then").
In terms of consequences, AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do however also classify conditions before inferring actions, and therefore classification forms a central part of most AI systems.
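
As a rough illustration (not taken from any particular expert-system shell), the classifier/controller distinction and the if-then inference above can be shown with a toy forward-chaining rule set; the rule format and data are invented for this sketch.

    # Toy forward-chaining illustration of the two rule types mentioned above.
    # A classifier rule adds a new fact; a controller rule emits an action.

    classifier_rules = [("shiny", "diamond")]   # if shiny then diamond
    controller_rules = [("shiny", "pick up")]   # if shiny then pick up

    def infer(facts, classifier_rules, controller_rules):
        """Forward-chain classifier rules to a fixed point, then fire controllers."""
        facts = set(facts)
        changed = True
        while changed:                          # keep applying rules until nothing new is added
            changed = False
            for condition, conclusion in classifier_rules:
                if condition in facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        actions = [action for condition, action in controller_rules if condition in facts]
        return facts, actions

    facts, actions = infer({"shiny"}, classifier_rules, controller_rules)
    print(sorted(facts))   # ['diamond', 'shiny']
    print(actions)         # ['pick up']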
Classifiers make use of pattern recognition for condition matching. In many cases this means finding not an exact match but the closest one. Techniques to achieve this divide roughly into two schools of thought: conventional AI and computational intelligence (CI).
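
The "closest match rather than exact match" point can be made concrete with a nearest-neighbour sketch; the feature vectors and labels below are made up purely for illustration.

    import math

    # Minimal nearest-match classifier: instead of requiring an exact match on
    # the condition, pick the known pattern closest to the observation.
    known_patterns = {
        (1.0, 0.9): "diamond",
        (0.1, 0.2): "pebble",
    }

    def classify(observation):
        closest = min(known_patterns, key=lambda p: math.dist(observation, p))
        return known_patterns[closest]

    print(classify((0.8, 0.7)))   # "diamond" -- not an exact match, just the closest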
Conventional AI research focuses on attempts to mimic human intelligence through symbol manipulation and symbolically structured knowledge bases. This approach limits the situations to which conventional AI can be applied. Lotfi Zadeh stated that "we are also in possession of computational tools which are far more effective in the conception and design of intelligent systems than the predicate-logic-based methods which form the core of traditional AI." These techniques, which include fuzzy logic, have become known as soft computing. These often biologically inspired methods stand in contrast to conventional AI and compensate for the shortcomings of symbolicism. These two methodologies have also been labeled as neats vs. scruffies, with neats emphasizing the use of logic and formal representation of knowledge while scruffies take an application-oriented, heuristic, bottom-up approach.
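
Fuzzy logic, one of the soft-computing techniques mentioned above, replaces a hard true/false condition with a degree of membership. The membership function below is an invented example, not any standard formulation.

    # Crisp logic treats "shiny" as either true or false; fuzzy logic lets it
    # hold to a degree between 0 and 1.

    def shininess(reflectance):
        """Map a reflectance reading in [0, 1] to a degree of 'shiny'."""
        return max(0.0, min(1.0, (reflectance - 0.2) / 0.6))

    for r in (0.1, 0.5, 0.9):
        print(f"reflectance {r}: shiny to degree {shininess(r):.2f}")
    # A rule such as "if shiny then pick up" can then fire with a strength equal
    # to the membership degree rather than all-or-nothing.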


Science fiction film: A.I.

Artificial Intelligence: A.I.
From Wikipedia, the free encyclopedia

Artificial Intelligence: A.I. is a 2001 science fiction film co-produced, written, and directed by Steven Spielberg. The project had long been developed by Stanley Kubrick, who died before the film started shooting, and Spielberg dedicated the film to him.
The film won five Saturn Awards, including Best Science Fiction Film, and was nominated for Academy Awards for Best Visual Effects and Best Original Score.



PLOT

The story is set sometime in the future. Global warming has led to an ecological disaster resulting in a drastic reduction of the human population and rising sea levels. Cities like New York City and Venice lie in ruins. Mankind's efforts to maintain civilization lead to the creation of android artificial intelligence. These efforts culminate with the creation of David, an android child (or "mecha") programmed with the ability to love. Cybertronics, the company that created David, wishes to test its latest creation on a loving couple wanting a child, and approaches one of its employees, Henry Swinton, with the idea.

Android Gigolo Joe (Jude Law)
Henry and Monica Swinton are a married couple whose son, Martin, is dying of a rare illness. Hoping for a cure, the Swintons have their son placed in a state of suspended animation until such a cure can be found. The emotional toll of such a long, chronic illness nearly shatters the marriage. Henry agrees to bring David home to Monica. Although Monica is initially frightened of the android, she eventually warms to him after activating his imprinting protocol, which irreversibly causes David to feel love for her as a child loves a parent.

David, holding Teddy, and Joe in Rouge City.
As he continues to live with the Swintons, David is befriended by Teddy, a mecha toy that takes upon itself the responsibility for David's well-being. Martin is eventually cured of his illness and is brought home. The two are expected to live together like brothers. A sibling rivalry builds between the two, however. Martin's jealousy prompts him to manipulate David into more and more irrational behavior, but Martin's scheming backfires when he and his friends activate David's self-protection programming at a pool party. Thinking himself in danger, David grabs hold of Martin, begging him to "keep me safe." David falls into the pool, taking Martin with him, and sinks to the bottom. Martin is saved from drowning, but David's actions prove too much for Henry; he wants David returned to the manufacturer.
Fearing that David will be destroyed, Monica instead releases both David and Teddy into the forest to live as unregistered mechas, warning David to avoid the "flesh fairs," events where mechas are destroyed before cheering crowds by anti-mecha groups. David is soon captured and nearly destroyed at such a flesh fair, and narrowly escapes with the help of Gigolo Joe, a prostitute mecha who is on the run after being framed for the murder of one of his clients. The two become friends and set out to find the Blue Fairy, whom David remembers from the fairy tale "Pinocchio" as a being who has the power to turn him into a real boy. If he becomes a real boy, he imagines, Monica will love him and take him back. Joe and David make their way to the decadent metropolis known as Rouge City, a somewhat futuristic Las Vegas, in search of the knowledge that will lead them to the Blue Fairy.

New York City now in ruins and submerged after the effects of Global Warming.
An oracular computer personality called Dr. Know eventually leads David, with Joe in tow on the run from the authorities, to his manufacturer's laboratory at the top of a building at Rockefeller Center in the flooded ruins of Manhattan. He is greeted by his human creator, Professor Hobby, who is unsurprised to see him there. Hobby excitedly tells David that his arrival at the planned destination demonstrates the true, 'realistic' nature of David's (artificially created) emotions, because he was driven by his love for his mother and desperation to be with her. To Hobby, this proves that David is a success as a robot model and that the line of David replicas will be fit for the general market. Disheartened, David leaves and falls from the office into the ocean, possibly trying to commit suicide.

New York City 2,000 years later.
David falls to the streets of submerged Manhattan and sees what he thinks is the Blue Fairy, but is suddenly rescued by Gigolo Joe. At the surface, Gigolo Joe is taken by the authorities, leaving David and Teddy to find the Blue Fairy in the ruins of Coney Island. There, in the submersible, David finally encounters the Blue Fairy, a statue from one of the rides at the park. Naïvely believing it to be the real Blue Fairy, he makes his wish to be turned into a real boy. He waits for the wish to come true, repeating it into infinity, with Teddy by his side, and is ultimately trapped underwater when the park's Ferris wheel falls on his vehicle.


History of AI

Artificial intelligence
From Wikipedia

"AI" redirects here. For other uses of "AI" and "Artificial intelligence", see AI (disambiguation).

The modern definition of artificial intelligence (or AI) is "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[1] John McCarthy, who coined the term in 1956,[2] defines it as "the science and engineering of making intelligent machines."[3] Other names for the field have been proposed, such as computational intelligence,[4] synthetic intelligence[4][5] or computational rationality.[6] The term artificial intelligence is also used to describe a property of machines or programs: the intelligence that the system demonstrates.
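
The agent definition quoted above can be read as a simple perceive-act loop. The sketch below is a generic illustration of that definition under an invented thermostat scenario, not code from any of the cited sources.

    # A minimal "intelligent agent" in the sense quoted above: it perceives its
    # environment and picks the action expected to move it toward its goal.
    # The thermostat domain is invented for this illustration.

    class ThermostatAgent:
        def __init__(self, target):
            self.target = target

        def perceive(self, environment):
            return environment["temperature"]       # the agent's percept

        def act(self, temperature):
            # Choose the action most likely to achieve the goal temperature.
            if temperature < self.target:
                return "heat"
            if temperature > self.target:
                return "cool"
            return "idle"

    environment = {"temperature": 17.0}
    agent = ThermostatAgent(target=21.0)

    for _ in range(5):                               # simple perceive-act loop
        action = agent.act(agent.perceive(environment))
        if action == "heat":
            environment["temperature"] += 1.0
        elif action == "cool":
            environment["temperature"] -= 1.0

    print(environment["temperature"])                # 21.0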
AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization and logic.[7] AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.[8]
History
The field was born at a conference on the campus of Dartmouth College in the summer of 1956.[9] Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing:[10] computers were solving word problems in algebra, proving logical theorems and speaking English.[11] By the middle 60s their research was heavily funded by DARPA,[12] and they were optimistic about the future of the new field:
1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do"[13]
1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[14]
These predictions, and many like them, would not come true. They had failed to anticipate the difficulty of some of the problems they faced: the lack of raw computer power,[15] the intractable combinatorial explosion of their algorithms,[16] the difficulty of representing commonsense knowledge and doing commonsense reasoning,[17] the incredible difficulty of perception and motion[18] and the failings of logic.[19] In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, DARPA cut off all undirected, exploratory research in AI. This was the first AI Winter.[20]
In the early 80s, the field was revived by the commercial success of expert systems and by 1985 the market for AI had reached more than a billion dollars.[21] Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow.[22] Minsky was right. Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.[23]
In the 90s AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.[24] The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[25]
1961-65 -- A. L. Samuel developed a program which learned to play checkers at a Masters level.
1965 -- J. A. Robinson introduced resolution as an inference method in logic.
1965 -- Work on DENDRAL was begun at Stanford University by J. Lederberg, Edward Feigenbaum and Carl Djerassi. DENDRAL is an expert system which discovers molecular structure given only information about the constituents of the compound and mass spectral data. DENDRAL was the first knowledge-based expert system to be developed.
1968 -- Work on MACSYMA was initiated at MIT by Carl Engleman, William Martin and Joel Moses. MACSYMA is a large interactive program which solves numerous types of mathematical problems. Written in LISP, MACSYMA was a continuation of earlier work on SIN, a program for solving indefinite integration problems.
References on early work in AI include Pamela McCorduck's Machines Who Think (1979) and Newell and Simon's Human Problem Solving (1972).
