AMID the debris of a ruined city, as guerrillas scuttle from one pile of rubble to another to escape the attentions of enemy patrols, a foot stamps down onto a human skull, crushing it into dust. The foot is metal, belonging to a robot, its red eyes fixed on wiping out the last remnants of humankind.

This apocalyptic scene from Arnold Schwarzenegger's The Terminator is part of a sizeable strand in cinema over the past 30 years, one that plays on one of our darkest fears about technology: that computers will turn on their human masters. From 2001: A Space Odyssey to The Matrix, one of humanity's greatest aids has been portrayed as the monster that will seek to destroy its creator.

Steven Spielberg's latest film is an extension of this trend. A.I. Artificial Intelligence is ostensibly a futuristic reworking of the Pinocchio tale, with computer chips taking the place of wood. But it is also a warning of the dangers lurking in computer technology, and it expresses the fear that, if robots develop minds of their own, they might realise they do not need humans any more.

But, as well as exaggerating the dangers, this view ignores the benefits of artificial intelligence, according to Professor John MacIntyre, director of the Centre for Adaptive Systems at Sunderland University.

"Artificial intelligence has had a very bad press in popular culture. There is a common theme, where a machine has taken over from people and been a threat. But AI is a very powerful set of tools, which can solve problems traditional computing can't, and it has been used to make a positive effect in medicine, industry and engineering," he says.

Whereas traditional computers can only carry out the specific tasks they are programmed for, the idea behind AI is that it can mimic the human ability to reason and to learn. One example of its use could be in face-recognition technology, perhaps for security purposes.

Ask a computer to recognise a face and, if it finds an exact match, there is no problem. But if the face has changed, through ageing or the wearing of glasses, the computer struggles. People can still recognise the face, because they recognise the same underlying pattern, and it is this ability that AI could replicate.
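The difference between exact matching and pattern matching can be sketched in a few lines of code. The example below is purely illustrative and not drawn from any real face-recognition system: the "feature" numbers and the similarity threshold are invented for the sake of the demonstration.

    # Illustrative sketch only: exact matching versus pattern (similarity) matching.
    # The feature values and the 0.9 threshold are hypothetical, not taken from
    # any real face-recognition system.

    def exact_match(stored_face, seen_face):
        # A traditional program succeeds only if every feature is identical.
        return stored_face == seen_face

    def pattern_match(stored_face, seen_face, threshold=0.9):
        # An AI-style approach scores overall similarity, so small changes
        # (ageing, a pair of glasses) can still count as a match.
        diffs = [abs(a - b) for a, b in zip(stored_face, seen_face)]
        similarity = 1 - sum(diffs) / len(diffs)
        return similarity >= threshold

    stored = [0.2, 0.8, 0.5, 0.9]   # features recorded when the face was enrolled
    seen = [0.2, 0.8, 0.6, 0.9]     # the same person, slightly changed

    print(exact_match(stored, seen))    # False - one feature differs
    print(pattern_match(stored, seen))  # True  - the overall pattern still fits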

"Instead of taking a traditional approach, some parts of AI are modelled on the human learning process," says Prof MacIntyre. "Traditional computing does lots of calculations very quickly and with great precision, but it can't do intuitive gut feelings. AI can do those things."

But the ability of robots to learn raises the prospect of them one day becoming more intelligent than their human masters. This may be some time in the future, but it is a real possibility, according to Professor Jim Yip, who has spent 20 years researching AI, specialising in knowledge-based systems.

"A computer doubles its capabilities every 18 months, which means every three years it is four times as powerful," he says. "Looking at how a computer can improve itself is a major research area, and, in theory, you can write a programme that allows a computer to improve its knowledge.

"Computers learn by example. Every time they do something and it is wrong they remember it and don't do it again, and, if it is right, they remember and do it again."

To become a threat, however, computers would need to develop not just increasing knowledge but self-awareness: knowing what they are and whether they have a choice in particular decisions. And this may be just a matter of size, according to Prof Yip, director of the School of Computing and Mathematics at Teesside University.

He admits it is technically possible that artificial intelligence could outstrip human brainpower and that, once computers have developed a level of consciousness and the ability to make choices for themselves, there could be no stopping them.

"Humans have about two billion brain cells. There are about 200 million PCs, all linked together through the Internet," he says. "It could reach a size where the world becomes a mass consciousness and computers realise they exist. The question is, what happens when computer power gets to a certain size and realises it exists? This is not science fiction, it actually could happen.

"The technology could be put to good use or it could be dangerous, depending on how we use it, but at least you can switch a computer off. At the moment robots are pretty stupid, so it is alright, but I can't say it will be the same in 300 years. I do think human beings have to think more carefully."

For Prof MacIntyre, however, artificial intelligence is much more likely to benefit humanity than to seek to destroy it. But he admits that how robotics will develop in the future is very much up in the air.

"Within 20 or 30 years, people will have household appliances with intelligence that will be able to do menial tasks around the home," he says. "It is not going to be a stand-up, human-type robot, but some sort of little robot that will be able to do the hoovering, navigate its way around a room and work out where things are, not simply a machine that you switch on.

"Where it goes beyond that, nobody really knows. There is a debate going on in the AI research community about where it is going and the ethical issues. If you have developed this robot that has artificial intelligence and it suddenly becomes autonomous, can you switch it off? There is also the question of when an artificially intelligent machine actually develops emotions, and nobody knows that. And if it does, when will we know, and what do we do?"

But, while the past 20 years have seen enormous advances in AI, and it can in some cases surpass human ability, it still has very limited scope, according to Prof MacIntyre. "AI is extremely clever at solving one particular problem, but knows absolutely nothing about anything else.

"The most intelligent system that has been developed so far roughly equates to the intelligence of a sea-slug. We are a long way from even mammals, let alone primates or humans," he says.

"Maybe, at some time in the future, AI will allow such complex models that some level of consciousness will exist, but I would not expect that anything like that would be seen in my lifetime.

"AI is a very powerful tool but it is nevertheless a tool for our use. We should try and view AI in a much more positive light than simply as machines taking over the world."