“The future of search is to build the ultimate assistant”

Last week, one of my customers pointed me to an article on Search Engine Land, titled: “The rise of personal assistants and the death of the search box“.

Google’s Behshad Behzadi explains why he thinks the convenient little search box that sits in the top right corner of nearly every page on the web will be replaced. The article was written by Eric Enge and, of course, reflects his interpretation.

“Google’s goal is to emulate the “Star Trek” computer, which allowed users to have conversations with the computer while accessing all of the world’s information at the same time.”

I think that’s a great goal, and these things could be happening in the not too distant future. Of course we all know Siri, Cortana and Google Now, so this is not so hard to imagine. Below is a timeline of the growth of Google.com:

[Figure: timeline of the growth of Google.com, from “The rise of personal assistants and the death of the search box”]

These days we are talking more and more to our computers. For most people it still feels weird, but “It’s becoming more and more acceptable to talk to a phone, even in groups.”

So… search applications are getting to know our voice and the way we speak is the way we search.

That demands a lot from search engines. They need to get more intelligent to be able to interpret the questions and match them with a vast amount of possible answers hidden in documents, knowledge bases, graphs, databases etc.
Once it has found possible answers, the search application needs to present them in a meaningful way and get a dialog going to make sure it has interpreted the question correctly.

This future got me wondering about “enterprise search”. All this exciting stuff is happening on the internet. Search behind the firewall is lagging behind. The vast information and development power that is available on the internet is not available in the enterprise.
An answer engine needs constant development: better language interpretation, more knowledge graphs (facts and figures) to drive connections, machine learning to process the queries, the clicks visitors perform, other user feedback etc.

The question is whether on-premise enterprise search solutions can ever deliver the same experience as the solutions that run in the cloud. It’s impossible to come up with a product that installs on-premise and has the same rich features that Google is delivering online. One could try, but then the question is whether the product can keep up with the improvements.

So with the “death of the search box”, will this also lead to “the death of the on-premise search solution”? Google is dropping support for its on-premise search solution, the Google Search Appliance, for a reason. The move to the cloud and personal assistants is driving that.


Queries are getting longer. What’s the impact?

Recently I’ve been working on a project for a Dutch financial company. It concerns the search functionality of their website. The business case is clear: support self-service and answering common questions to take the load (and costs) off the call center.

Of course we are taking search log analysis VERY seriously, because there is much to be learned from the logs.

Some statistics: 400,000+ user queries per month; 108,000+ “unique” queries; the top 5,000 queries cover only 7% of the total. The long tail is big.
So focusing on the top queries will only cover roughly 7,500 of the 108,000 unique queries.
68% of the queries have 3 or fewer terms. When we remove the stopwords, the share of queries with 3 or fewer terms rises to 78%.
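
Getting to numbers like these is not rocket science. Here is a minimal sketch (in Python) of the kind of query-log crunching described above – the log file name, its one-query-per-line format and the tiny stopword list are my own assumptions for illustration, not the project’s actual tooling:

```python
from collections import Counter

STOPWORDS = {"de", "het", "een", "van", "hoe", "wat", "is"}  # illustrative only

def analyze(log_path: str) -> None:
    # Assumed export format: one raw user query per line.
    with open(log_path, encoding="utf-8") as fh:
        queries = [line.strip().lower() for line in fh if line.strip()]

    counts = Counter(queries)
    total = len(queries)
    print(f"total queries: {total}, unique queries: {len(counts)}")

    # Share of all traffic covered by the 5,000 most frequent distinct queries.
    top_5000 = sum(n for _, n in counts.most_common(5000))
    print(f"top-5000 coverage: {top_5000 / total:.1%}")

    # Query-length distribution, with and without stopwords.
    short = sum(1 for q in queries if len(q.split()) <= 3)
    short_no_stop = sum(
        1 for q in queries
        if len([t for t in q.split() if t not in STOPWORDS]) <= 3
    )
    print(f"<= 3 terms: {short / total:.0%}, after stopword removal: {short_no_stop / total:.0%}")

analyze("queries.log")  # hypothetical log export
```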

We did some relevancy testing (manually, so very time-consuming) and we know that the queries with 2 or 3 terms perform quite well.
The analysis of a part of the long tail helps us identify stopwords and synonyms. So far… so good.

These numbers made me more curious. I want to know what the trend is in the number of terms used to formulate queries. Are people “talking” to search engines in a more natural way? (See: Longer Search Queries Are Becoming the Norm: What It Means for SEO.) I am trying to find more resources on this, so let me know if you know of any.

Why is this important?

A lot of search engines work “keyword based” when trying to find relevant results. They check whether the keywords appear in a document, and if so, the document is considered relevant. When combining those keywords with an “AND”, the more terms you use, the fewer results you will find. If there are a lot of “meaningless” terms in the query, the chance that you will find what you are looking for becomes smaller and smaller. Stopwords can help out here, but one cannot cover all variants.
OK, you say, “Why don’t you combine the terms with an ‘OR’?” Indeed, that will bring back more possibly relevant documents, but with the engine we use (the Google Search Appliance), the relevancy is then poor.
The issue here is known as “Precision” and “Recall” (see Wikipedia: “Precision and Recall”).
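
A toy example makes the trade-off tangible. The documents and relevance judgements below are made up, and this is not how the GSA works internally – it just shows why AND matching gains precision at the cost of recall, and OR matching the reverse:

```python
# Toy corpus and relevance judgements, invented for illustration.
docs = {
    1: "mortgage interest rate deduction",
    2: "lower my monthly mortgage payments",
    3: "savings account interest rate",
    4: "mortgage application checklist",
}
relevant = {1, 2}                      # what a human judged relevant for this query
query_terms = ["mortgage", "interest", "rate"]

def matches(text: str, terms: list[str], mode: str) -> bool:
    words = text.split()
    hits = [t in words for t in terms]
    return all(hits) if mode == "AND" else any(hits)

for mode in ("AND", "OR"):
    retrieved = {i for i, d in docs.items() if matches(d, query_terms, mode)}
    precision = len(retrieved & relevant) / len(retrieved)
    recall = len(retrieved & relevant) / len(relevant)
    print(f"{mode}: retrieved {sorted(retrieved)}, precision {precision:.2f}, recall {recall:.2f}")
# AND: retrieved [1], precision 1.00, recall 0.50
# OR:  retrieved [1, 2, 3, 4], precision 0.50, recall 1.00
```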

When coping with longer queries – in natural language – the search engine needs to be smarter. The user’s intent has to be determined so that the essence of the search request is revealed. That essence can then be used to find relevant documents/information in unstructured content.
Instead of (manually) feeding the search engine with stopwords, synonyms etc., the search engine needs to be able to figure this out by itself.
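
One way an engine could derive such knowledge from the logs itself is simple frequency analysis: terms that occur in a very large share of queries carry little discriminative value and are stopword candidates. A minimal sketch, with a threshold and sample queries of my own making:

```python
from collections import Counter

def derive_stopwords(queries: list[str], threshold: float = 0.3) -> set[str]:
    """Terms occurring in more than `threshold` of all queries are stopword candidates."""
    in_how_many = Counter()
    for q in queries:
        for term in set(q.lower().split()):
            in_how_many[term] += 1
    n = len(queries)
    return {term for term, count in in_how_many.items() if count / n > threshold}

sample = [
    "hoe kan ik mijn hypotheek verhogen",
    "wat is mijn polisnummer",
    "hoe wijzig ik mijn rekeningnummer",
]
print(derive_stopwords(sample, threshold=0.5))  # -> {'hoe', 'ik', 'mijn'} (order may vary)
```

Synonyms can be mined in a similarly data-driven way, for instance from query reformulations within one session.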

Now I know that the “search engine” is something that ignorant users (sorry for the insult) see as one “thing”. We as search consultants know that there is a lot going on in the total solution (normalization, stemming, query rewriting etc.) and that a lot depends on the content, but still… the “end solution” needs to be able to cope with these longer queries.

Bottom line: search solutions need to be able to handle short queries (a few terms) as well as complete “questions” as end users use more and more terms.
What current products support that? We talked to a couple of companies that say they support “natural language processing”. A lot of times this comes down to analyzing the questions that are asked of the call center and creating FAQs that match those questions, so that a search will come up with the FAQ. Although effective, that’s not completely the idea: it demands a lot of manual work, while the search is supposed to be “magic” and work on the existing content without changes.
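
To be concrete about what such FAQ matching often boils down to: the sketch below maps an incoming query to the closest known FAQ by simple term overlap. The FAQ entries and stopword list are invented for illustration; a production system would add stemming, synonyms or embeddings, and – more importantly – someone still has to write and maintain the FAQs:

```python
# Invented FAQ data; a real deployment would maintain far more entries.
FAQS = {
    "How do I raise my mortgage?": "faq/raise-mortgage",
    "How do I change my bank account number?": "faq/change-account",
    "Where do I find my policy number?": "faq/policy-number",
}
STOPWORDS = {"how", "do", "i", "my", "where", "the", "a"}

def tokens(text: str) -> set[str]:
    return {t for t in text.lower().replace("?", "").split() if t not in STOPWORDS}

def best_faq(query: str) -> str:
    q = tokens(query)
    score, url = max((len(q & tokens(question)), url) for question, url in FAQS.items())
    return url if score > 0 else "no FAQ match"

print(best_faq("raise mortgage amount"))   # -> faq/raise-mortgage
print(best_faq("policy number location"))  # -> faq/policy-number
```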

My customer is now looking at IBM Watson for their longer-term plans. They want to make search more “conversational” and support both the queries on the website and a “virtual assistant” that acts like a chat.

Will search become more conversational? Will users type in their queries as normal questions? How will search vendors react to that?


Replacing a search appliance with… a search appliance?

With the news that the Google Search Appliance is leaving the stage of (enterprise) search solutions – of which there is still no mention on the official Google for Work Blog – there are a couple of companies eager to fill the “gap”.

I think that a lot of people out there believe that the appliance model is why companies choose Google. I think that’s not the case.

A lot of people like Google when they use it to search the internet. That’s why I hear a lot of “I want my enterprise search to be like Google!“. That’s pretty fair from a consumer perspective – every employee and employer is also a consumer, right? We enterprise search consultants – and the search vendors – need to live up to those expectations. And we try to do so. We know that enterprise search is a different beast from web search, but still, it’s good to have a company that sets the bar.

There are a few companies that deliver appliance models for search, namely Mindbreeze and Maxxcat. They are jumping on the bandwagon, and they do deliver very good search functionality with the appliance model.

But… wait! Why did those customers of Google choose the Google Search Appliance? Did they want “Google in a Box”? I don’t think so. They wanted a “Google-like search experience”. The fact that it came in a yellow box was just “the way it was”. Now I know that the “business” really liked it. It was kind of nifty, right? The fact was that in many cases IT was reluctant.

IT infrastructure has been “virtualized” for years now. A hardware-based solution does not fit into that. IT wants fewer dedicated servers to provide the functionality. They want every single server to be virtualized so that uptime/fail-over and performance can be monitored and tuned with the solutions that are already “in place”.

Bottom line? There are not many companies that choose an appliance because it is an appliance. They choose a solution and accept that it happens to be an appliance. IT tends to be very reluctant about it.

I’ve been (yes, the past tense) a Google Search Appliance consultant for years. I’ve seen those boxes do great things. But for anything that could not be configured in the (HTML) admin interface, one had to go back to Google Support (which is/was great, by the way!). There’s no way for a search team to analyse bugs or change the configuration at a deeper level than the admin console.

So… If you own a Google Search Appliance, you have enough time to evaluate your search needs. Do this consciously. It may well be that there is a better solution out there, even open source nowadays.

 

 


Definition of “Federated search”

A funny thing struck me when reading Martin White’s book “Enterprise Search – second edition” and its treatment of “federated search”.

He defines “Federated search” as:

…which is an attempt to provide one single search box linked to one single search application that has an index of every single item of information and data that the organization has created.

I have been working in the field of search for a couple of years now. When talking about federated search, I do not see it as “one search box with all the information (structured and unstructured) stored in one single index”. The fact is that some search vendors/solutions even have something called “search federators”. Think of HP Autonomy’s “federation engine” and the “OneBox” feature of the Google Search Appliance (now discontinued).

I think of federated search just as the opposite of that. In a “federated search” environment the essence is that all information is NOT stored in one big index.

Since there are content systems from which you cannot get the information into your “single index” (or only at high cost and with security issues), and since some systems have good search functionality of their own, there is a different way of connecting content locked in “information silos”.
The goal is to present the user with a complete view of every piece of possibly relevant information. For that to work, not all the information has to be stored in one single index (the “Hall” approach that Martin mentions; I agree that this does not have to be the goal and is not even realistic).

Instead, the search application can also reach out to different (search) systems at once, sending a query that is distributed over those (search) systems.
The results don’t have to be presented in one single result list. An intelligent, well-designed search user interface (or maybe more of a search “portal” in this case) can present the results from the different sources next to each other, using “progressive disclosure” to peruse results from one (search) system at a time, but in a unified interface.
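
For what it’s worth, the fan-out itself is not the hard part. Here is a minimal sketch of the idea – the backend functions are placeholders standing in for whatever internal systems expose a search API, and a real implementation would add paging, authentication and error reporting:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder backends; real ones would call each system's own search API.
def search_intranet(q): return [f"intranet hit for '{q}'"]
def search_dms(q):      return [f"dms hit for '{q}'"]
def search_crm(q):      return [f"crm hit for '{q}'"]

BACKENDS = {"intranet": search_intranet, "dms": search_dms, "crm": search_crm}

def federated_search(query: str, timeout: float = 2.0) -> dict[str, list]:
    """Send one query to every backend; keep the results grouped per source."""
    results: dict[str, list] = {}
    with ThreadPoolExecutor(max_workers=len(BACKENDS)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in BACKENDS.items()}
        for name, future in futures.items():
            try:
                results[name] = future.result(timeout=timeout) or []
            except Exception:
                results[name] = []  # a slow or failing source must not block the rest
    return results

print(federated_search("pension scheme"))
```

The important design choice is the one described above: results stay grouped per source, so the interface can show them side by side instead of pretending they share one relevance scale.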

Wikipedia agrees with me on this:

Federated search is an information retrieval technology that allows the simultaneous search of multiple searchable resources. A user makes a single query request which is distributed to the search engines, databases or other query engines participating in the federation. The federated search then aggregates the results that are received from the search engines for presentation to the user

Of course, Federated search has some very serious disadvantages, but mentioning them is not the goal of this article.

So in my opinion an “enterprise search” solution can/will consist of a combination of a central index (that holds as much information as is economically and technically feasible) and federated search to other (search) systems to complete the 360-degree view of all information in an organization.

I just want to get the definitions straight.

 


Goodbye Google Search Appliance, we are going to the cloud!

The History

It was the year 2005 when Google decided that they could use their superior search to make information in enterprises/behind the firewall searchable.

That year Google released the Google Mini: a cute little blue server, pre-installed with Google’s search software.

The Mini could index up to 300,000 documents. The functionality was limited, but it was great at crawling web-based content, just like Google.com. The Mini was mainly used to index intranets and public websites. That was before Google introduced Site Search as a product. The Mini did not have features like faceting or connectors to crawl documents from sources other than websites or databases.

Google must have realized that the Mini could not fulfill enterprise search demands (many different content sources, the need for faceting, tuning relevance, coping with millions of documents etc.), so they released the Google Search Appliance.

The first versions of the GSA were very similar to the Mini. They added some connectors, faceting, mirroring and APIs to manage the appliance.
One important feature was the ability to scale to millions of documents, distributed over several appliances. The limit of the number of documents one appliance could index was 10 million.
The proposition of the GSA shook up the enterprise search market. Management of the GSA was easy, and so enterprise search became easy. Or at least so it seemed. “Google knows search and now it is bringing that knowledge to the enterprise. We can have search in our business as good as Google.com.“ NOT so fast: there is a big difference between search on the web and search in the enterprise (read “Differences Between Internet vs. Enterprise Search“).

In 2012 Google pulled the Mini from their offerings and focused on selling more GSAs and improving their enterprise capabilities. I assume that the two were not that different at all and that there was a lot more money to be made with the GSA.

After that, more energy was put into improving the GSA. After version 6 (the Mini stopped at version 5) came version 7, with more connectors and features like wildcard search (truncation with ‘*’), entity recognition and document preview (Documill). A minor detail is that the out-of-the-box search interface of the GSA was never improved: it still reflected Google.com as it was back in 2005.

In recent years it became clear that Google didn’t know what to do with this anomaly in its cloud offerings. The attention dropped, employees were relocated to other divisions (mainly Google Apps and Cloud) and the implementation partners were left on their own when it came to sales support. There was not much improvement in the way of new features.

At the beginning of 2015 Google revived its attention and dedicated more resources to the GSA again. It was clear (at that time) that the profits on the GSA were good and could be even better. Better sales support was promised to the partners (global partner meetings) and sales went slightly up. In 2015 version 7.4 was released, with some small improvements but with a brand new connector framework (Plexi adaptors). Several technology partners invested in developing connectors to support this new model. A small detail was that the new connector framework relied heavily on crawling by the GSA, with the adaptors acting more like a “proxy”. The old connector framework was pretty independent of the GSA, sending the full contents of documents to the GSA. (Because of the open source character of the connectors, other companies started using them in their own offerings, like LucidWorks using the SharePoint connector.)

I’ve been working with the GSA for a long time, and I must say that the solution made a lot of customers happy. The GSA really is easy to administer, and its performance and stability are near perfect.

On Thursday, February 4th 2016, Google sent an e-mail to all GSA owners and partners stating that the GSA is “end-of-life”. Google will continue to offer support and renewals until 2019, but there will be no further innovation on the product. This came as a blow to the existing customers (who have invested a lot of money very recently) and to the partners.

Google doesn’t have an alternative for enterprise search yet. It must be working on a cloud offering for that. It will certainly be able to search through Google Drive (duh..) and some cloud services like Salesforce, Dropbox, Box etc., since the data for those applications already resides in the cloud.

Also see the article “Why Google’s enterprise search is on a sunset march to the cloud“.

Observations

  • Google is a cloud company; it doesn’t want you to keep information in on-premise or private cloud solutions.
    Supporting an on-premise solution is “strange” for Google.
  • Enterprise search is hard. Slapping an appliance onto intranets and websites doesn’t cut it.
    Enterprise search is not web search: there are many more sources and different relevancy models.
  • The license model of the GSA runs into problems with a large number of documents/records.
    Let alone when you want to combine structured info from databases.
  • Delivering a search experience like Google.com in the enterprise is not possible out-of-the-box.
    Google.com has a lot of “application logic” and call-outs to other sources. What we see is not just the search engine at work.
  • The GSA is a “relevancy machine”. It does not work well with structured content.
  • To be able to support enterprise search, the vendor needs many connectors to tap into many different content systems.
    Google supports 5 content sources out-of-the-box/provided by Google. Other connectors are delivered by partners and require additional investments/contracts.
  • To be able to support disparate content systems with different metadata models, the search engine needs metadata mapping functionality (a minimal sketch of such a mapping follows after this list).
    The GSA always relied on the quality of content and metadata in the indexed content systems, which in reality is often lacking.
  • Also see the article “Why Google’s enterprise search is on a sunset march to the cloud“. With a slightly different take on the subject.
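
As promised above, a minimal sketch of what metadata mapping at index time can look like. The source systems, field names and target schema are purely illustrative assumptions, not any product’s actual configuration:

```python
# Fields from disparate source systems are normalised onto one target schema
# before indexing, so the index can facet and rank on consistent fields.
FIELD_MAP = {
    "sharepoint": {"Title": "title", "Author": "author", "Modified": "last_modified"},
    "documentum": {"object_name": "title", "r_creator_name": "author", "r_modify_date": "last_modified"},
}

def normalise(source: str, record: dict) -> dict:
    """Map a raw source record onto the unified index schema."""
    mapping = FIELD_MAP.get(source, {})
    doc = {target: record[src] for src, target in mapping.items() if src in record}
    doc["source"] = source  # keep provenance so the UI can facet on it
    return doc

print(normalise("sharepoint", {"Title": "Q3 report", "Author": "j.doe", "Modified": "2016-02-01"}))
# -> {'title': 'Q3 report', 'author': 'j.doe', 'last_modified': '2016-02-01', 'source': 'sharepoint'}
```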

Conclusion

Google has proven not to be an enterprise search solution provider. It tried with the Google Search Appliance, but it (sadly) failed. The GSA was a good product that fit well in many areas. But Google is a cloud company and does not have other on-premise solutions.
Google must have come to the conclusion that enterprise search is hard and that the investment doesn’t stack up against the profit. Google doesn’t publish revenue numbers for GSA deals, but it must be a small part of their revenue.

The GSA lacks some features that would make it “enterprise ready”, and the number of feature requests would give them a workload of years to catch up with the current vendors.

Google is a cloud-born company that thinks in large volumes of users. Their offerings are all cloud based and focus on millions of users paying a small amount of money on a usage basis. When operating at that scale, minimal margins are OK because of the volume.
Enterprise search doesn’t work that way. The license model of the GSA (based on the number of documents) holds back opening up large amounts of documents (but that’s not only the case for the GSA; other search vendors have that model too).

Having said that, there are a couple of search vendors that are ready to step up and are going to use Google’s retreat from the enterprise search market as their “golden egg”:

  • Mindbreeze
    Offers an Enterprise Search Appliance. They even offer a solution to migrate from GSA to Mindbreeze Inspire.
    The 300+ connectors could be the reason to switch over.
  • SearchBlox
    Long-term competitor of the GSA. Offers a similar experience but with more functionality and at lower cost.
  • LucidWorks Fusion
    The commercial party behind Solr. Solr is the most widely supported open source search engine in the world, with a lot of features. Fusion adds connectors, manageability and data processing at index time to enable an advanced search experience.

 This blog reflects my personal opinions, not those of my employer.


A different course for StateOfEnterpriseSearch

At the end of 2015 the second edition of the book “Enterprise Search” was published. It was written by Martin White, a respected member of the enterprise search community, author of several books on the subject and a gifted speaker at many events worldwide.

I am in touch with Martin from time to time, via Twitter, but also “in real life” at seminars. Recently I sat on an expert panel during the Enterprise Search Europe event in London, which was moderated by Martin.

Martin mentions this site in his list of sources that publish information about enterprise search. That made me realize that my publications are appreciated internationally as well. A nice compliment.

But… most of my blog posts are in Dutch, so not really readable for an international audience. To take a fuller part in the international conversation about enterprise search, it is better if I start writing in English.

This decision was not an easy one. My work generally takes place in the Netherlands (with the occasional trip to Belgium), and of course I also want to be found in the Netherlands. I assume, however, that Dutch readers mostly search on English terms in the field of enterprise search, so my site will still be found.

So… this will be the last article in Dutch on this website.

I will keep blogging in Dutch on Blog-IT. Translations of my posts on this site will appear there.


Enterprise Search – History repeats itself

Two decades ago there was one large vendor of search solutions for businesses: Verity. Verity delivered a solution for making all information within an organization searchable and findable, regardless of the source. This kind of solution is also known as “Enterprise Search”.
Autonomy acquired Verity in the early 2000s, and a few years ago HP acquired Autonomy.

Since then, many new vendors of “enterprise search” have entered the market:

  • Coveo (now IBM)
  • Endeca (now Oracle)
  • Exalead (now Dassault Systèmes)
  • LucidWorks (Fusion)
  • and more

In my time as a search consultant I have implemented many solutions and followed the developments of various vendors, including new ones.

Every enterprise search solution has to deal with the same concerns:

  1. How do you get the information from the various systems into the index (crawling, feeding, connectors)?
  2. How do you make sure that users of the search solution can only find the results they are actually allowed to see (in line with the permissions that apply in the source system)? A minimal sketch of this kind of security trimming follows below.
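
As mentioned in point 2, here is a minimal sketch of document-level security trimming (“early binding”): each indexed document carries the principals that may read it, and results are filtered against the user’s memberships at query time. Field names and the group model are illustrative, not any vendor’s actual API:

```python
# Toy index with per-document access control entries.
INDEX = [
    {"title": "HR salary overview", "allowed": {"group:hr"}},
    {"title": "Public intranet news", "allowed": {"group:everyone"}},
    {"title": "Board meeting minutes", "allowed": {"group:board", "user:ceo"}},
]

def principals_for(user: str, groups: set[str]) -> set[str]:
    """All principals a user can match: themselves, their groups, and 'everyone'."""
    return {f"user:{user}"} | {f"group:{g}" for g in groups} | {"group:everyone"}

def search(query: str, user: str, groups: set[str]) -> list[str]:
    who = principals_for(user, groups)
    return [
        doc["title"]
        for doc in INDEX
        if query.lower() in doc["title"].lower() and doc["allowed"] & who
    ]

print(search("minutes", user="ceo", groups={"management"}))  # -> ['Board meeting minutes']
print(search("minutes", user="alice", groups={"hr"}))        # -> []
```

In practice the hard part is not this filter but keeping the ACLs in the index synchronized with the source systems – which is exactly where the “old” vendors have years of head start.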

The “old” solutions such as Autonomy have many connectors for hooking up information systems, complete with solutions for permissions, updates, scalability, availability etc.

The “new” vendors run into the same problems the “old” vendors have already solved. How do you determine which user is allowed to see which result? What if a source system is unavailable? Do you simply delete all content that is no longer reachable because a connector cannot get to it?

Last week I ran into exactly such a problem. In an environment where we had implemented Google’s solution (Google Search Appliance (GSA) + Adaptor for SharePoint), the adaptor (= connector) turned out to be no longer available. Because the adaptor was gone, the GSA could no longer reach that source.

The result? All documents (4 million) were removed from the index. It takes about two weeks to re-acquire that content. You can imagine the effect on the user experience.

It amazes me to see that vendors of enterprise search solutions keep wanting to reinvent the wheel because they think they can do it better or differently. The “not invented here” syndrome seems to prevail, instead of (re)using what others have already come up with and building on that.

Of course I understand the commercial side. What I don’t understand is how new vendors want to build a new solution without using the knowledge and solutions that already exist.

Even a donkey doesn’t stumble over the same stone twice, does it?

This also underlines the importance of involving an expert in the field of “enterprise search” when your organization wants to look into implementing search solutions.
Search engine vendors often highlight only a few aspects of a total solution.


ContentCafé meeting about “Search”

On April 8, ContentCafé is organising its 11th session. This time the topic is “Search“.

When Google was offline for 5 minutes in 2013, the number of page views on the internet dropped by 40%. We navigate the web through search engines: month after month, we collectively ask Google’s inscrutable algorithms some 2.66 million questions every 60 seconds. So it’s not so strange to think that navigation or interaction problems can also be ‘solved’ with search. If you need arguments to show that this does not work, read this article.

But when does search actually work, and how do you know whether a search engine is functioning well? How can you provide input for an implementation? What is semantic search, what are its practical possibilities, and how can you use it so that your visitors no longer even have to search?

The eleventh edition of ContentCafé takes place on Wednesday April 8 at 7 pm at Performance Solutions in Hoofddorp. We will gladly let you get lost, and find your way back, in the world of search, semantics and algorithms.

Edwin Stauthamer will talk about day-to-day practice, drawing on his experience advising on and implementing search solutions for businesses.


Enterprise search adoption is held back by license costs

In my many years as an “Enterprise Search Solutions” consultant I have witnessed many successful and less successful implementations.

Mind you, I am talking about real “enterprise search” solutions here: making all information present within an organization searchable and usable for all employees:
http://en.wikipedia.org/wiki/Enterprise_search
So this is not about specific search solutions such as those for call centers, R&D or websites.

Lately I keep running into the problem of license costs when rolling out enterprise search solutions.

An organization decides to purchase a search product based on a certain “scope” or business case, and the investment decision is based on that scope.
After the initial implementation – which is often successful – a demand for “more” arises: more sources, more documents, more users.

At that point, however, the organization runs into the limits of the initial license. Licenses for commercial software are based on the number of servers the software may run on, the number of CPUs that may be used, or the number of documents the index may contain.

And that is where it goes wrong. Even though the search solution has a lot of potential and could open up more information to more employees, the decision is made not to expand.

The reason is usually the costs involved. Not the cost of consultants or developers, but the cost of the license upgrade needed to index more documents or to serve more users (read: queries).

The license model of the Google Search Appliance (GSA) is an example of this. That model is based solely on the number of documents that can be searched. The entry-level model covers 500,000 documents. That seems like a lot when we are talking about a website, but it quickly becomes far too little when we are talking about a file system, a DMS or databases.
The GSA has a lot of potential when it comes to meeting the information needs of employees. Relevance is very good and the initial configuration is not complex. However, when we are talking about all information and documents within an organization, we quickly end up with millions of “documents”. The costs then run into the hundreds of thousands of euros and sometimes even millions. The same goes for vendors such as Exalead, HP/Autonomy and Oracle/Endeca.
For large organizations (more than 1000 employees) this may still be justifiable. For the “mid-market” – companies between 50 and 500 employees – it quickly becomes unaffordable. Of course we have to weigh this against the business case – how much more can I earn, how much can I save – which is very hard to substantiate for “enterprise search”. The cost of consultants and developers is often only a fraction of this.

Enterprise search – providing good, contextual results, making ALL business information searchable, integrating search into work processes – demands a lot of attention and effort from specialists. These specialists can deliver solutions for complex search questions and user interfaces aimed at optimally supporting different processes.
These solutions, however, can almost never reach the level needed to address truly company-wide problems, because of the license costs of the underlying enterprise search products.

We see that organizations – to get around the license cost problem – are increasingly looking at open source solutions. These solutions are often very well suited to solving a specific problem: think of Big Data, Data Discovery, Search Based Applications and e-commerce applications.
Enterprise search, however, has other aspects that these open source solutions do not address well, such as enterprise security models and connectors for the various enterprise content management systems, let alone the very user-friendly administration environments the commercial products offer. In that case a lot of time and energy has to be put into the total solution, so that everything ends up being held together by custom work, without a solid foundation for future expansion and without a clear direction from the party (which party?) behind the open source product.

In my opinion, the commercial vendors of enterprise search solutions should take a hard look at their license models. Do they really want to solve the problem of information within companies that cannot be found and reused, or do they want to earn as much as possible in the short term and accept that it soon grinds to a halt because of the costs an “enterprise wide” solution entails?
The cost of a good solution should not sit in the product used – there are several that can do roughly the same – and its licenses, but in the carefully considered development effort needed to make the total solution generate more value.
Commercial vendors could take a cue from the “subscription” model of LucidWorks (the commercial organization behind Solr), which is no longer based on the number of documents, servers or CPUs.

We, consultants in the field of enterprise search, want to deliver good solutions for companies of any size, but we are held back by the license costs of commercial products.

 


Simple things we can do to make “tacit” knowledge visible

One Common Model

Different authors have come back to a general concept along these lines:

  • Instill a knowledge vision
  • Manage the conversations
  • Mobilize knowledge activists
  • Create the right context for knowledge creation
  • Globalize local knowledge

At AnswerHub, we try to work within all these areas on a regular basis, although the idea of “globalizing local knowledge” — essentially, making sure that certain bits of information aren’t tied up in one person’s head/e-mail — is one of the true value-adds of our software.

All the steps above are crucial, although the terminology can feel a little “business-school-y” from time to time. What exactly does it mean to “instill a knowledge vision,” for example? How do you “mobilize knowledge activists?” Let’s see if we can break this down into some day-to-day examples.

Simple Things We Can Do To Uncover Tacit Knowledge

  1. Set one meeting a week aside as Discovery Day: Have three people picked beforehand; their goal is to do five-minute presentations (no longer than that) on an aspect of work that isn’t part of the day-to-day grind but really intrigues them. After the 15 minutes of presenting, the other participants in the meeting go to one of the three presenters (whoever interested them most) and the presenters take their colleagues and explain the idea a bit more in depth. This is a way to promote the idea of learning, looking outside the day-to-day, and fostering discovery among employees.
  2. Set one meeting at the beginning of the month as a Gaps meeting: If you want to avoid this being a meeting, you can turn it into a Shared Doc/Tumblr/etc. Essentially, everyone is supposed to list some of the biggest knowledge gaps that prevented them from doing their best work in the previous month, as well as 2-3 new things they learned in the work context. If everyone contributes in the first five days of the month, you now have a picture of your biggest knowledge gaps — as well as what you’re doing well. You can plan for the coming month off of that!
  3. Lucky Lottery Partnerships: At the beginning of a six-week cycle, bring clusters of 25-50 employees together and draw them off in a lottery into groups of six to eight. Within the next six weeks, the newfound groups need to share new types of knowledge and demonstrate how they did so; this can be weekly meetings, coffee dates, a poster or white paper, or something else. It can feel like more work — that’s where you need top-down buy-in — but in reality, it helps cement a culture where pursuit of learning / new knowledge is paramount. That type of culture will thrive long-term.
  4. Pulse checks: The idea here is to quickly (brevity is key) figure out how your people are most comfortable learning. Would they rather learn from peers, from experts? From SlideShares, from videos? In quick bursts or day-long seminars? Remember: a key differentiator between top companies (in terms of engagement) and low-ranking companies — and this is at any size — is the access to and context around new opportunities to learn/grow. Your employees want that to be provided, so you need to figure out what makes them learn the best.
  5. Process Software: The ultimate goal with tacit knowledge capture is taking local knowledge — only Bob knows how to do this, so when Bob is out of office or Bob takes another job, we’re doomed — and making it global knowledge. Software, like AnswerHub, can be a powerful tool for doing just that. The key elements therein are:
    • Making sure Bob isn’t threatened and still realizes his value
    • Figuring out the most comfortable way for everyone else to learn Bob’s skills
    • Setting up a few different modalities/programs for Bob’s knowledge to be disseminated
    • Creating organic communication channels where people can ask Bob questions
    • Having a space where the shared knowledge is now physically shared.

In a five-step process (with help from technology), you just went from information silos and knowledge being contained locally to shared knowledge throughout your organization. It’s hard, but it’s definitely achievable.

- See more at: http://answerhub.com/article/5-steps-to-tapping-into-your-tacit-knowledge
