SemanticWeb


The entire Internet is becoming one humongous knowledge base because WE are following the semantic web architecture and standards.

http://www.w3.org/2001/sw/
http://www.daml.ecs.soton.ac.uk/Resources.html
http://www.systinet.com/resources/tutorials


Now, if that isn't a GrandDelusion, I don't know what one is.


WE are not doing it only because it is still too unfamiliar. Getting into RDF and the Semantic Web using N3 is something we each should do sooner or later, along with OWL vocabularies and Proof Markup Language for Semantic Web Services. Once a few of us start doing it, the rest will learn by example, like OneHundredMonkeys. It's WhatWeCanDo, in addition to adding ideas and links or just lurking. With the news of PML, the Semantic Web is now rich enough to be a top candidate for modeling InformationPhysics and applying it to everything. Using N3 notation and Semantic Web tools, ObjectWiki has already arrived; we just don't know it yet.
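
For anyone who wants to start, here is a minimal N3 sketch of what such knowledge looks like (the ex: namespace and its terms are invented for illustration; only the rdfs: and owl: namespaces are standard):

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix ex:   <http://example.org/wikiworld#> .   # hypothetical namespace

    # One class and one fact, readable by machine and human alike.
    ex:WikiPage a owl:Class ;
        rdfs:comment "A page of shared knowledge on a wiki." .

    ex:SemanticWeb a ex:WikiPage ;
        rdfs:label "SemanticWeb" .

Feed a file like that to a tool such as cwm and it becomes queryable data rather than prose.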

It's only a GrandDelusion if WE don't get a RoundToit. Proof Markup Language gives us a means of distinguishing fact from delusion, if used properly. We each can still choose to use it properly or not.

-- JimScarver


SemanticWeb would be nice, but the truth of the Internet is that BigBusiness is trying to make it one big con, selling you something. They really don't care what the something is, so long as they get a buck while you get your jollies.

Before 1994 there was no commercial Internet. 1.2Kb dial-up access cost $10 per hour nationally and up to $50 or more internationally. The only hope I saw for universal access was commercializing it: like TV, access could be free with economies of scale and advertising.

It worked! At 56Kb to 1000Kb, Internet access is almost free, and in spite of the pop-ups and banners there has been an information explosion, making nearly the total knowledge of humanity available somewhere on the network, if you can find it.

But business does not control the Internet; everyone is a publisher, and the library of publicly available knowledge-base ontologies is growing steadily. Once the tools are more mature, everyone who cares to will contribute to the growing web-wide knowledge base in a knowledge explosion.

What the SemanticWeb standards do is allow all the separate ontologies published anywhere to be utilized intelligently in knowledge bases and expert systems. Not only is the information out there, it becomes usable.
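
As a hedged sketch of what that enables (both namespaces below are hypothetical), N3 plus OWL lets anyone bridge independently published ontologies so a reasoner can treat their data as one knowledge base:

    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix med:  <http://example.org/medicine#> .    # hypothetical ontology A
    @prefix hosp: <http://example.net/hospital#> .    # hypothetical ontology B

    # Bridge statements: data written against either vocabulary
    # can now be queried and reasoned over together.
    med:Patient owl:equivalentClass    hosp:Admittee .
    med:treats  owl:equivalentProperty hosp:caresFor .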

It is not a grand delusion, it is the future.


Well Jim, I hope you are right and I am wrong, but my life experience and personal knowledge base suggest that it is a GrandDelusion.

Most people I know who aren't on dial-up pay $50 a month or more for Net access. Dial-up users in the States can currently get very cheap access at $10 a month, but they have to put up with their time online being only a quarter as effective, and with constant bombardment by marketing in what are, for other users, generally non-commercial spaces.

SemanticWeb might grow into something useful, but so what? The vast majority of the Web currently is about making money or sex. A good quarter of it is purely money scams. Another 20% of it is pure hate literature. If you use a system of agreement to evaluate what is true, then all women love to have non-consensual sex and service barnyard animals as well as lose 20 pounds, all men need to have their penises enlarged by 6 inches, all African-descended people are mentally impaired, etc...

I do not believe that sort of garbage. But search engines now go by bulk, weighting their results by how many other sites agree with them as well as link to them. Studies have shown that 80% of the people who use the Internet will believe anything so long as it is presented professionally, and 60% of Internet users will even believe things they know in real life to be false when those things are presented professionally on the Internet.

At this time, I believe the final result will be that SemanticWeb does nothing but lend weight to the scammers and marketers, and hide the real knowledge that is available. I really hope I'm wrong, but human greed and human behavior favor my view, I believe. Semantics will just be used to meta-weight pages even more in search engines, and the only people who do that sort of thing are not out to share "scientific truth", just their personal propaganda or scam.


We are all publishers on the semantic web; our shared knowledge base is already beyond anyone's control. All the public domain ontologies provide leverage for individuals to contribute significantly to our CollectiveIntelligence knowledge pool.


Not really.

The US Government is examining taking over Google, and a couple of other search engines, because the search engine is the doorway to our collected knowledge on the Net.

And it's not only the government. Many corporate and social entities understand that. They patrol the search engines to ensure no information they disagree with will be presented or pointed to. Or they sue the search engine. Scientology's history online is full of such examples... ---StarPilot


I Live In Google. I also work for the Government. It would really suck if the Government controlled Google. Google is my knowledge base. I use Google to patch together the stuff I work on. Who is going to patch Google after the Government gets done with it? Without Google, I would not know where else to turn. I am hooked.

Suing a search engine is like stabbing yourself in the brain, then twisting the knife and scooping out that yucky white stuff you think you don't need. Why would you sue something that could benefit you? DumbAnimals. KaJoTra


The internet is free; any part of it that is controlled is not the internet. That became clear in 1989, when the Chinese government was unable to stop student internet access via UUNET during the Tiananmen Square incident.

Google is good, but there are hundreds of search engines. We can make our own search engine. http://altavista.com is my second choice --JimScarver


I work with bear skins and knives; Google Groups is where I live. Old Paradox PAL or newer Access VBA syntax, scripts, code, structures: it's all at hand. If it's not there, it's cached. We could build a search engine, but I don't think my search engine would compare with these highlights: http://www.google.com/press/highlights.html

I'm really not that worried about it. It's just that I know the flaws of government management. --KaJoTra


LOL! So do I. I work for NASA, and they show all the flaws of government management!

Google gets sued, and a lot. They are being legally maneuvered into a company that provides listings without the ability to choose what it lists. That's LittleBrother's effect. BigBrother itself is making Google change what it points to, out of NationalSecurity. The Net is too big for any government or corporation to control, but by focusing on access points, they effectively gain control of the Net.

China requires all Net access to go through its government-owned computer choke points. In theory, this lets them cut off anything they don't like (which they do, and quite frequently), as well as monitor who is doing what. The downside for them is that there is too much data, making it impossible to capture and analyze all the data streams. However, they are working with Internet-savvy and MachineAgent-savvy corporations to let computers analyze those data streams in real time and catch all those 'dissidents', for whatever reason.

So long as you maintain control of how people get out past you, you control their Internet. That's why many corporations are going to proxies and such: to control their workers' use of the Net.

Remember... in Iraq and China (as well as many other places) it is illegal to own any means of communicating through channels that bypass the government. That is done to allow the government there to control what information its citizens have access to. And even the American government actively participates in the control of information.

There is a hacker saying that information wants to be free. That is incorrect. Some people like to share information. More like to control it. And others simply don't care. ---StarPilot


Is SemanticWeb proposed or used for decontrolling information (KnowledgeUnmanagement)? One humongous non-exclusive knowledge base? It seems akin to PubWan, which may be another moniker for the same thing.

You GetIt! Corporations are not likely to share information that gives them market advantage, but sharing knowledge has become simple and will be fruitful when tools like ObjectWiki and KnowledgeManagement systems support the semantic web. We can already create knowledge with XML, and there is a growing number of public domain and commercial tools to utilize it. Much information is already shared on the net and content-addressable via search engines. The SemanticWeb will give it meaning and usefulness, letting each of us apply it in our own way. Much will be proprietary, but the public stuff is significant and will grow exponentially as the semantic web becomes ubiquitous.

SemanticWeb playgrounds:


So Jim... how are the Knowledge faithful going to prevent the porn sites from defining (penis == physics) and (breasts == discounts), etc. etc.? Remember, there are more of those sites than anything else on the net. The second most common sites after that are commerce, which will have no problem saying things like (discounts == free) and (free == breasts). Who is the keeper of Semantics? If you don't have a moderated master list, then you end up with things like (puppies == very hard core copulating acts) and (Mickey Mouse == acts of unspeakable cruelty to animals). If SemanticWeb is OPEN, so that the maker of the page says "This is like/equivalent to this", then it's doomed to absolute failure due to the HumanAnimal's dual motivations of greed and sex. That's something you can (unfortunately) bank on... ---StarPilot
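
That worry can be written down in N3 itself; nothing in the syntax stops a publisher from asserting a self-serving equivalence (the spam: and ex: namespaces are invented for illustration):

    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix spam: <http://example.com/storefront#> .  # hypothetical marketer vocabulary
    @prefix ex:   <http://example.org/common#> .

    # Perfectly well-formed, entirely self-serving. The language
    # cannot tell truth from marketing; only trust in the source can.
    spam:Discounts owl:equivalentClass ex:Free .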


[Image: SW-stack2.gif -- the Semantic Web layer-cake architecture diagram]

The green areas are subject to abuse as you suggest, Star; the others are a matter of consensus and may be wrong as well. The difference is that you can look at them and choose to accept them or not.

Here is a definition of logic. We can read it and verify whether we agree or whether we see changes that are needed. If we accept this logic ontology, then ontologies based on it, e.g. genealogy, may be built. We can read these and trust them, or not, as well. As we add ontologies, any inconsistencies that arise can be automatically identified and resolved, either by showing the incorrectness of an ontology or by limiting the domain to which it can be applied.

You can bet that most people will write bad ontologies even with the best of intentions. If they simply produced text, that would be impossible to show, as text is ambiguous. Inconsistencies in N3, however, are undeniable.
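
A sketch of what "undeniable" means here (the fam: genealogy terms are invented for illustration): give an OWL reasoner a disjointness axiom plus data that violates it, and the clash is mechanical, with no ambiguous reading to hide behind:

    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix fam: <http://example.org/genealogy#> .   # hypothetical genealogy ontology

    # A well-meant but wrong axiom from the ontology author:
    fam:Parent owl:disjointWith fam:Child .

    # Data published elsewhere: alice has both parents and children.
    fam:alice a fam:Parent, fam:Child .

    # A reasoner must flag the contradiction, exposing the bad axiom.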

Revealing the knowledge structure of information to computers allows that knowledge to be diagrammed, tested, and integrated in a consistent and logical manner. It won't ever be perfect, but it is a tool that can empower people to utilize our collective knowledge better and help humans make more sensible choices, if they use it well. ---JimScarver


Jim, did you help them with their spelling for their documentation? :-D

So... in other words, as it is a consensus, the consensus on the web will be that (sex == everything else), then? Otherwise, to keep it useful, you'll have to patrol each and every formula/rule set... especially if it includes things like (if ChildA isMemberOf Family1 and Child1 isMemberOf Family1 then ChildA isSibling Child1), as that is patently not true.
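
In cwm-style N3 rules, that flaw, and the guard it is missing, look roughly like this (the fam: terms are invented; log:notEqualTo is a standard cwm builtin):

    @prefix log: <http://www.w3.org/2000/10/swap/log#> .
    @prefix fam: <http://example.org/genealogy#> .   # hypothetical vocabulary

    # Naive rule: makes every family member a sibling of every
    # other member -- including their own parents and themselves.
    { ?a fam:isMemberOf ?f . ?b fam:isMemberOf ?f . }
        => { ?a fam:isSibling ?b } .

    # Patched rule: distinct individuals sharing a parent.
    { ?a fam:hasParent ?p . ?b fam:hasParent ?p .
      ?a log:notEqualTo ?b . }
        => { ?a fam:isSibling ?b } .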

In the limited domains of corporate data monkeying, I can see it being very useful (i.e., HRdata.personnelentry.socialsecuritynumber1 = Security.badgeentry.socialsecuritynumber1, therefore HRdata.personnelentry = Security.personnelentry). Outside of that, the general data miner will be screwed without being part of a moderated/verified equivalency group.
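
That corporate case is a one-line N3 rule (all schema names invented for illustration):

    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix hr:  <http://example.org/hr#> .        # hypothetical HR schema
    @prefix sec: <http://example.org/security#> .  # hypothetical badge schema

    # Records carrying the same social security number describe
    # the same person, so merge them.
    { ?x hr:ssn ?n . ?y sec:ssn ?n . } => { ?x owl:sameAs ?y } .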

And as for saying this data column = that data column, and therefore these are the same/equivalent owning object: well, there's already a bunch of languages for that.

Still, if it is bounded by the "Consensus of the Greedy and the Well-Meaning Ignorant", it will be useful.

---StarPilot

The semantic web standard is the language of knowledge, not the knowledge per se; your personal preferences and "trusted" ontologies and ontology sources remain your own. The "Web of Trust" ought to be the subset of the web where we agree, or prove that we get the same answer, each applying our own selected "trusted" underlying ontologies. At this point you must accept all the ontologies employed in a proof. This effectively limits proofs to utilizing standardized (consensus) ontologies.

On WikiWorld you have N3-encoded knowledge above, including some blatant errors. It is unlikely anyone will reference them, or any other obviously dubious ontologies, on purpose. Many personal rules acting on your behalf would not be shared (published) at all. Different groups will use widely different ontologies.

I was playing, not too expertly, with a simple expert system: http://synergy.xanthusinc.com/~jim/e2glite/wellness.html Try it! It's sometimes fun. You might get a good diagnosis and remedy, or perhaps a laugh...

A great tool is Protege from Stanford. It is a wonderful ontology editor with all you need.

GroupWare and WorkFlow processes could automate much of the knowledge generation. We can make knowledge-creation wizards. We will have tools that generate knowledge from almost any application or database. Our machines can keep track of all sorts of stuff for us. Bottom-up it is traditional computer programming; top-down it is knowledge engineering. The rules developed are the new programming, not on any particular machine but across the internet: enforcing our personal policies, filtering and auto-responding to our email, and easily communicating knowledge.
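
A sketch of one such personal policy rule (the mail: vocabulary is made up; a real agent would need an agreed mail ontology):

    @prefix mail: <http://example.org/mail#> .  # hypothetical mail ontology

    # Personal policy: messages from senders I have marked
    # untrusted go straight to the junk folder.
    { ?m a mail:Message ; mail:from ?s .
      ?s a mail:UntrustedSender . }
        => { ?m mail:folder mail:Junk } .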

No longer will everyone have to collect the same knowledge independently. Collaborative knowledge will be going into high gear soon. Shared knowledge will help make our personal Intelligent Agent pretty damn intelligent in acting on our behalf, empowering us to do more. If we get around to it :)

--JimScarver
