SINN
         




    Abstracts


    Opening


    Maximizing university research impact through self-archiving

    Stevan Harnad, Université du Québec à Montréal

    Abstract:
    (1) Universities need to adopt a self-archiving policy -- an extension of their existing "publish or perish" policy to "publish with maximal impact". A potential model for such a policy can be found at http://www.ecs.soton.ac.uk/~harnad/Temp/archpolnew.html along with (free) software for creating a standardized online university CV, linking all entries for peer-reviewed articles to their full text self-archived in the university eprint archives: http://paracite.eprints.org/cgi-bin/rae_front.cgi

    (2) University libraries need to help with the first wave of self-archiving, doing "proxy" self-archiving for those researchers who feel too old, tired, or busy to do the few keystrokes per paper that are involved. http://www.ecs.soton.ac.uk/~harnad/Tp/resolution.htm#7.3

    (3) Research funding agencies such as NSF or NIH (US), HEFCE or EPSRC (UK), NSERC, CFI or FRSQ (Canada), or CNRS or INSERM (France) need to encourage self-archiving as part of the normal research cycle, requiring not only that the research findings be published, as they already require, but that their visibility and usage be maximized by making them openly accessible through self-archiving. http://www.ariadne.ac.uk/issue35/harnad/

    (4) Scientometric performance indicators and analyzers such as http://citebase.eprints.org/cgi-bin/search -- rather like Google, but based on citation links instead of ordinary links -- need to be created and used to demonstrate, monitor, measure, evaluate and reward the maximization of research impact through open access. Free online accessibility increases citation impact by 336%: http://www.neci.nec.com/~lawrence/papers/online-nature01/

    (5) Journals need to support self-archiving by modifying their copyright transfer or licensing agreements to encourage self-archiving (as 55% of them already do, with most others agreeing on a per-paper basis if asked: so ask!): http://www.lboro.ac.uk/departments/ls/disresearch/romeo/Romeo%20Publisher%20Policies.htm



    Session 1:
    The Future of PhysNet


    PhysNet and its Mirrors: The Project SINN

    Michael Hohlfeld, Institute for Science Networking Oldenburg GmbH

    Abstract:
    The aim of the project SINN - Suchmaschinennetzwerk im Internationalen Naturwissenschaftlichen Netz (a search engine network in the international science network) - was to enhance the distributed information system PhysNet (www.physnet.net) into a fast and secure service by setting up mirrors of the PhysNet service all over the world and by building a network of scientific search engines within this system.

    An overview of SINN and PhysNet is given, presenting the motivation and tasks within these projects as well as the current development status of the mirror network and its usage.
    Some experiences with combining a distributed workforce in one service are discussed as well.



    SINN and XQuery: Results and Implementation

    Thomas Severiens, Institute for Science Networking Oldenburg GmbH

    Abstract:
    The goal of SINN is to build an open, standardized, load-balanced and scalable network of scientific search engines. Open means that the software should be free to use for science and education. The network should use standardized protocols for communication between the participants. A first implementation of this network using W3C's XML Query standard is presented. This network allows searching across several data repositories, e.g. Harvest brokers, even on highly loaded networks. Data providers can register themselves to answer queries, or unregister when their load has reached its maximum level. Outside the runtime of answering queries, data providers can share their indexes to increase redundancy. The network gives the user the full functionality of XML Query. A short overview of this programming-language-like query language is also given.
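
    The register/unregister and query-dispatch behaviour described above can be sketched as follows. This is a minimal illustration only: all class names, endpoints and the transport callback are invented here, not taken from the actual SINN implementation.

        # Minimal sketch of a query-dispatch registry for a network of
        # scientific search engines. Names and endpoints are hypothetical.

        class ProviderRegistry:
            """Tracks data providers currently willing to answer queries."""

            def __init__(self):
                self.providers = {}  # provider id -> query endpoint URL

            def register(self, provider_id, endpoint):
                # A provider announces itself while it has spare capacity.
                self.providers[provider_id] = endpoint

            def unregister(self, provider_id):
                # A provider withdraws once its load reaches the maximum level.
                self.providers.pop(provider_id, None)

            def dispatch(self, xquery, send):
                # Fan the XQuery expression out to all registered providers and
                # concatenate their partial results; `send` does the actual I/O.
                results = []
                for endpoint in self.providers.values():
                    results.extend(send(endpoint, xquery))
                return results

        registry = ProviderRegistry()
        registry.register("harvest-broker-1", "http://broker1.example.org/xquery")
        # An XML Query expression of the kind a user might submit:
        query = ('for $d in collection("preprints")//document '
                 'where contains($d/title, "superconductivity") return $d/title')
        print(registry.dispatch(query, lambda ep, q: [f"result from {ep}"]))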



    Search Engine Tree in Hungarian PhysNet

    Jozsef Kadlecsik and Kati Szalay, KFKI RMKI Computer Networking Center

    Abstract:
    Six Hungarian physics institutions plan to cooperate in building up a search engine tree under the coordination of the Computer Networking Center of KFKI RMKI. The design relies on the mnogosearch engine, with a gateway on top which connects the tree directly to PhysNet.
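
    One way such a gateway could merge answers from the engines below it in the tree is sketched here; the function and its interface are invented for illustration and do not show mnogosearch's actual API.

        # Sketch of a gateway node in a search engine tree: it forwards a
        # query to its child engines and merges the answers before passing
        # them up towards PhysNet. Names are hypothetical.

        def gateway_search(query, children, ask):
            """Query each child engine; `ask` returns (url, score) pairs."""
            merged = {}
            for child in children:
                for url, score in ask(child, query):
                    # Keep the best score seen for each document URL.
                    merged[url] = max(score, merged.get(url, 0.0))
            return sorted(merged.items(), key=lambda item: item[1], reverse=True)

        fake_ask = lambda child, q: [(f"http://{child}/doc1", 0.7)]
        print(gateway_search("plasma", ["inst-a.hu", "inst-b.hu"], fake_ask))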



    Session 2:
    Networking of Configurable Robots, Summarizing and Brokerage


    Development status of Harvest

    Kang Jin Lee, harvest.sourceforge.net

    Abstract:
    Description of recent changes in Harvest, with a focus on:

    • Internationalization efforts
    • Transition from SOIF to XML (see the sketch after this list)
    • Object storage system
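
    As an illustration of the SOIF-to-XML transition, a minimal sketch that parses a simple SOIF record and re-emits it as XML. Real SOIF attribute values carry a byte count in braces and may span several lines; for brevity this sketch handles single-line values only, and the record shown is invented.

        # Sketch: convert a simple SOIF record (Harvest's Summary Object
        # Interchange Format) into XML. Multi-line values are not handled.
        import re
        import xml.etree.ElementTree as ET

        SOIF = """@FILE { http://example.org/paper.ps
        Title{26}:\tQuantum Chaos in Billiards
        Author{8}:\tJ. Smith
        }"""

        def soif_to_xml(record):
            lines = record.splitlines()
            template, url = re.match(r"@(\S+)\s+\{\s+(\S+)", lines[0]).groups()
            root = ET.Element("object", {"template": template, "url": url})
            for line in lines[1:]:
                m = re.match(r"\s*([\w-]+)\{\d+\}:\s*(.*)", line)
                if m:
                    name, value = m.groups()
                    ET.SubElement(root, name.lower()).text = value
            return ET.tostring(root, encoding="unicode")

        print(soif_to_xml(SOIF))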


    MathNet, MPRESS and its Search Engines

    Judith Plümer, University of Osnabrück

    Abstract:
    Progress in science is based on the availability and accessibility of the state of the art. In mathematics and related sciences this state of the art is reflected in the form of preprints, which are nowadays published electronically. Since every author becomes his or her own publisher, there is a lack of organized access to the material.
    We find the same problem of missing organization when searching for people or teaching material in a specific area on the Web. MathNet deals with these problems and tries to solve them in a user-driven approach. The talk briefly introduces MathNet and explains techniques and developments on the basis of MPRESS, one of the MathNet services: the Mathematics PREprint Server System (MPRESS) is a service offering an index of mathematical preprints published electronically all over the world. Referencing more than 50,000 documents residing on more than 100 WWW servers, MPRESS is the largest index of mathematical preprints worldwide. The index data of the preprints are enriched by the use of metadata, which improves the quality of retrieval drastically.
    The development of metadata schemes and encodings forces a transition from plain metadata to RDF, as recommended by the W3C.
    The talk sketches the MPRESS system and the related development in the area of metadata. The problems that these developments cause for a system like MPRESS, as an example of a MathNet service, are discussed, and the transition to a new metadata scheme is presented, along with the improvements that result from this transition.
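
    To illustrate the transition from plain metadata to RDF mentioned above, a minimal sketch using the rdflib library; the preprint URL and field values are invented for illustration.

        # Sketch: encode Dublin Core metadata for one preprint as RDF,
        # as in the plain-metadata-to-RDF transition described above.
        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DC

        g = Graph()
        preprint = URIRef("http://example.org/preprints/2003-042")
        g.add((preprint, DC.title, Literal("On the Spectra of Random Graphs")))
        g.add((preprint, DC.creator, Literal("A. Mathematician")))
        g.add((preprint, DC.subject, Literal("05C80")))  # MSC classification
        g.add((preprint, DC.date, Literal("2003-07-15")))

        print(g.serialize(format="xml"))  # the record as RDF/XML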



    Harvesting Webpages that contain Mathematical Information

    Winfried Neun, ZIB Berlin

    Abstract:
    The aim of the Math-Net project (under the aegis of the International Mathematical Union) is to build up a pool of high-quality information on mathematical research and mathematicians worldwide. In the framework of this project we at ZIB are harvesting pages with mathematical content from the Web. These pages contain, besides simple text information, mathematical formulae or keywords. These formulae are traditionally encoded in LaTeX, but with emerging new standards like MathML, OpenMath and OMDoc we encounter more and more webpages that use the new standards. Our goal is to retrieve, in a mechanized way, as much semantic information as possible, independent of the encoding style used for the formulae, by providing extensions to the Harvest software. We finally want to classify the mathematical information in a webpage based on the type of formulae included, complemented by mathematical keywords. In this talk we discuss some problems with the automatic detection of semantics that are caused by the encoding schemes. One example is the well-known encoding in MathML, where two different encoding types (presentation and content markup) serving different needs of the users, as well as mixed types, are defined. Some of the attempts we make to overcome these problems are based on heuristics.
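
    The two MathML encoding types mentioned above can be told apart by the element vocabulary a page uses. A toy detection heuristic follows; the tag lists are deliberately abbreviated, and a real classifier would cover the full MathML vocabularies.

        # Sketch: decide whether the MathML in a page uses presentation
        # markup, content markup, or a mix of both.
        import re

        PRESENTATION_TAGS = {"mrow", "mi", "mo", "mn", "mfrac", "msup", "msqrt"}
        CONTENT_TAGS = {"apply", "ci", "cn", "plus", "times", "eq", "power"}

        def classify_mathml(html):
            tags = set(re.findall(r"<\s*(?:m:)?(\w+)", html))
            has_presentation = bool(tags & PRESENTATION_TAGS)
            has_content = bool(tags & CONTENT_TAGS)
            if has_presentation and has_content:
                return "mixed"
            if has_presentation:
                return "presentation"
            if has_content:
                return "content"
            return "no mathml found"

        page = "<math><mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow></math>"
        print(classify_mathml(page))  # -> "presentation"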



    Construction of Physics Vocabularies from OAI Data and its Application to Link Classification for the PhysDoc Harvester

    Michael Schlenker, Institute for Science Networking Oldenburg GmbH

    Abstract:
    Using the metadata available via the OAI-PMH, a vocabulary of physics terms was automatically created by applying statistical and heuristic filtering and extraction methods to the descriptions of physics resources. The primary target of the filtering and extraction process was phrases of physical relevance, to be used later in query expansion and classification. One user of the gathered physics keyword and phrase lists is the SHRIMPS HTTP robot built by Svend Age Biehs. The robot searches known servers listed in the PhysDep service and tries to find pages on those sites where physics publications are listed. This complements the shallow-depth crawling done by the Harvest engine without SHRIMPS with a more in-depth look at specific pages deeper in the page hierarchy. SHRIMPS uses a mix of heuristic and hand-crafted rules, combined with pattern matching against the keyword and phrase lists, to determine whether a webpage contains relevant data and is worth harvesting. This increases the number of documents in the index while limiting pollution with non-relevant documents.
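
    A toy version of the statistical filtering step described above: recurring bigrams that survive a general-language stoplist are kept as candidate physics phrases. The stoplist, threshold and example texts are simplifications, not the project's actual rules.

        # Sketch: extract candidate domain phrases from OAI record
        # descriptions by frequency filtering against a stoplist.
        from collections import Counter
        import re

        STOPWORDS = {"the", "a", "of", "and", "in", "we", "is", "for", "on", "to"}

        def candidate_phrases(descriptions, min_count=2):
            """Return word bigrams that recur across descriptions and
            contain no stopwords; recurring bigrams tend to be domain terms."""
            counts = Counter()
            for text in descriptions:
                words = re.findall(r"[a-z]+", text.lower())
                for w1, w2 in zip(words, words[1:]):
                    if w1 not in STOPWORDS and w2 not in STOPWORDS:
                        counts[(w1, w2)] += 1
            return [" ".join(p) for p, n in counts.items() if n >= min_count]

        docs = ["We study quantum dots in magnetic fields.",
                "Transport through quantum dots is reviewed."]
        print(candidate_phrases(docs))  # -> ['quantum dots']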



    Session 3:
    Distributed Open Document-Archives


    Research Challenges to Semi-Automatically Enhance Quality in Distributed Open Archives

    Edward A. Fox, Virginia Tech

    Abstract:
    The "Open Archives Distributed" project proceeds at University of Oldenburg and at Virginia Tech, funded by DFG and NSF, respectively. It concerns managing both physics information and electronic theses/dissertations in Germany and USA, building upon the Open Archives Initiative. In this talk the focus is on recent work at Virginia Tech, especially the "Reengineering PhysNet in the uPortal Framework" thesis work of Ye Zhou, which offers a number of new services: PACS browsing, PACS automatic classification (using SVM together with extensive training), Physics employment service, XML schema-driven web-based metadata generation, and user account management. We seek advice on future efforts, such as: usability testing with physicists, browsing, filtering, PACS-based recommending, better crawling to accurately identify and collect physics department information, high-performance searching, and multi-lingual support.



    Citebase Search: Autonomous Citation Database for e-print Archives

    Tim Brody, University of Southampton

    Abstract:
    Citebase is a culmination of the OpCit Project and the Open Archives Initiative. The OpCit Project's aim to citation-link arXiv.org was coupled with the interoperability of the OAI to develop a cross-archive search engine with the ability to harvest, parse, and link research paper bibliographies. These citation links create a classic citation database, which is used to generate citation analysis and navigation over the e-print literature. Citebase is now linked from arXiv.org, alongside SLAC/SPIRES, and is integrated with e-Prints.org repositories using Paracite.
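
    A sketch of the linking step: once a reference string has been parsed out of a bibliography, an e-print citation can be recognized by its arXiv identifier. Old-style identifiers only, and the example reference is invented; real reference parsing is considerably messier.

        # Sketch: link a parsed reference string to an arXiv e-print by
        # spotting an old-style arXiv identifier in it.
        import re

        ARXIV_ID = re.compile(r"\b([a-z-]+(?:\.[A-Z]{2})?/\d{7})\b")

        def link_reference(reference):
            """Return an arXiv abstract URL if the reference cites an e-print."""
            m = ARXIV_ID.search(reference)
            if m:
                return "http://arxiv.org/abs/" + m.group(1)
            return None

        ref = "S. Author, Phys. Rev. B 64 (2001), see also cond-mat/0104123."
        print(link_reference(ref))  # -> http://arxiv.org/abs/cond-mat/0104123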



    EDoc Server at Humboldt University: Interfaces and Services

    Uwe Müller, Humboldt University Berlin

    Abstract:
    Originally established solely for the publication of dissertations, the document server of Humboldt University (http://edoc.hu-berlin.de/) has become the basis of the organisational and technological framework for scientific publication offered to all members of the university. To achieve a high quality of the digital documents with respect to usability and sustainability, the Electronic Publishing Group has pursued the strategy of using XML as the main data format since the first steps were planned. In addition, a modular and easily extensible metadata model has been developed, supporting diverse publication types as well as relations and hierarchies between objects.
    Having been the first German archive to support the Open Archives Protocol for Metadata Harvesting (OAI-PMH) as a repository, the document server now hosts an OAI service provider which is designated to serve as a search machine for DINI (German Initiative for Network Information), harvesting the German OAI-compliant university repositories. The OAI protocol has also been applied to establish a value-added service that uses documents and their metadata from distributed and heterogeneous document and publication servers to provide a print-on-demand service called ProPrint (http://www.proprint-service.de/). For this purpose a slightly modified metadata format has been defined, and the OAI interfaces of the associated data providers have been extended by a new verb. With the aid of this technological framework, users are given the possibility to search distributed archives in an integrated way and to combine the selected documents into a single PDF file, which can subsequently be printed by a commercial printing service if required.
    Moreover, the document server delivers its metadata and full-text documents via other interfaces to several systems, e.g. to the Digital Library of Humboldt University maintained by the university library, and to the newly established media server for learning and teaching materials at Humboldt University, which is being built up by the Multimedia Teaching and Learning Centre of the Computer and Media Service.
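
    The harvesting side of the OAI-PMH workflow described above can be sketched with the Python standard library alone. The endpoint path in the commented example call is illustrative (the edoc server's actual OAI path may differ), and resumption-token and error handling are omitted.

        # Sketch: a service provider's OAI-PMH ListRecords request,
        # returning the Dublin Core titles of the harvested records.
        from urllib.parse import urlencode
        from urllib.request import urlopen
        import xml.etree.ElementTree as ET

        DC_NS = "{http://purl.org/dc/elements/1.1/}"

        def harvest_titles(base_url):
            params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
            with urlopen(base_url + "?" + params) as response:
                tree = ET.parse(response)
            return [title.text for title in tree.iter(DC_NS + "title")]

        # Example call (endpoint path is illustrative):
        # print(harvest_titles("http://edoc.hu-berlin.de/oai"))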



    Metadata Quality

    Heinrich Stamerjohanns, Institute for Science Networking Oldenburg GmbH

    Abstract:
    While the technical framework of the Open Archives Protocol for Metadata Harvesting has been established and many implementations exist, the success of metadata harvesting now depends on the quality of the provided metadata. Thanks to the Repository Explorer by Hussein Suleman, provided metadata is now often formally correct, yet Service Providers still have difficulties interpreting the incoming metadata; necessary normalization and the lack of shared semantics make it difficult to create additional services.

    Here we present a Dublin Core Checker, which focuses not on the formal correctness of metadata records but on a simple analysis of the content of metadata elements, in order to give Data Providers feedback about the quality of the metadata they provide.
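
    A few content-level checks of the kind such a checker can apply are sketched below; the rules shown are illustrative examples, not the tool's actual rule set.

        # Sketch: content-level (rather than schema-level) checks on a
        # Dublin Core record, returning human-readable quality warnings.
        import re

        def check_dc_record(record):
            warnings = []
            for field in ("title", "creator", "date"):
                if not record.get(field, "").strip():
                    warnings.append(f"empty or missing '{field}' element")
            date = record.get("date", "")
            if date and not re.fullmatch(r"\d{4}(-\d{2}(-\d{2})?)?", date):
                warnings.append(f"'date' not in ISO 8601 form: {date!r}")
            if record.get("title", "").isupper():
                warnings.append("'title' is all upper case")
            return warnings

        record = {"title": "PHYSICS OF THIN FILMS",
                  "creator": "", "date": "15.07.2003"}
        print(check_dc_record(record))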



    CYCLADES: User Services for Open Archives

    Gudrun Fischer and Norbert Fuhr, University of Duisburg

    Abstract:
    CYCLADES is a system designed to provide an open collaborative virtual archive environment which (among other things) supports users and communities (and their members) with functionality for (i) advanced search in large, heterogeneous, multidisciplinary digital archives; (ii) collaboration; and (iii) filtering and recommendation.
    The document base of CYCLADES consists mainly of metadata records harvested from archives supporting the Open Archives (www.openarchives.org) standard.
    Users can build their own personal library of documents in their private folders, share documents with other users in common folders, and discuss them in annotations. For a given folder topic, users can ask the system for new documents related to this topic. Queries can be stored in folders for later re-submission.
    To save the user from specifying every single data source each time, archives can be grouped into user-defined collections. Collections, documents, and even communities and users can be recommended to other users, thus supporting knowledge exchange where desired.



    Session 4:
    Novel Tools for Distributed Services


    Distributed current awareness services

    Thomas Krichel, Long Island University

    Abstract:
    This talk introduces the experience with the "NEP: New Economics Papers" service attached to the RePEc digital library for Economics. In a terra firma part I will review the history and performance of the service. In a second part I will outline how NEP could become an important component of a "post-journal" academic evaluation system in the discipline.



    Towards Federated Referatories

    Erik Wilde, ETH Zürich

    Abstract:
    Metadata usage often depends on schemas for metadata, which are important to convey the meaning of the metadata. We propose an architecture in which users can extend the schema used by a system for managing referential metadata. Users can plug in new schemas and install custom filters for exporting metadata, so that users are not forced to limit their metadata to a fixed schema. The goal of this architecture is to provide users with a system that helps them manage their referatory, gives them powerful tools to adapt the system to their metadata, and still makes it possible to collect the metadata of several users in a central storage and exploit the common facets of the metadata. Our system is based on a specialized schema language, which has been built on top of the XML schema languages XML Schema and Schematron.
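
    A sketch of the two XML schema layers named above working together on one record, using the lxml library. The record, grammar and rule are invented for illustration and do not show the specialized schema language itself.

        # Sketch: combine grammar-based (XML Schema) and rule-based
        # (Schematron) validation of a metadata record.
        from lxml import etree, isoschematron

        record = etree.XML("<entry><title>Test</title><year>1890</year></entry>")

        xsd = etree.XMLSchema(etree.XML("""
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="entry">
            <xs:complexType><xs:sequence>
              <xs:element name="title" type="xs:string"/>
              <xs:element name="year" type="xs:integer"/>
            </xs:sequence></xs:complexType>
          </xs:element>
        </xs:schema>"""))

        schematron = isoschematron.Schematron(etree.XML("""
        <schema xmlns="http://purl.oclc.org/dsdl/schematron">
          <pattern>
            <rule context="entry">
              <assert test="number(year) >= 1900">year must be 1900 or later</assert>
            </rule>
          </pattern>
        </schema>"""))

        print(xsd.validate(record))         # True: the grammar is satisfied
        print(schematron.validate(record))  # False: the Schematron rule fails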



    Integrating distributed expertise:
    The Subject Guide of the Physics Virtual Library ViFaPhys

    Esther Tobschall, TIB Hannover
    Detlef Görlitz, University of Hamburg

    Abstract:
    Providing access to information resources relevant to physicists, the Subject Guide of the Physics Virtual Library ViFaPhys contains collections of information and information resources. The contents of the Subject Guide are displayed in a clearly organised, well-structured way, sorted by subject and by resource type.
    The high quality of the provided information is ensured by

    • intellectual selection of resources on the basis of stated criteria,
    • evaluation and characterisation of resources by selected experts,
    • automated and intellectual checks of the included sources and their descriptions.

    Thus, the integration of experts is crucial for achieving the Subject Guide's quality: scientists especially are qualified to judge the relevance of a resource and to evaluate its content.
    How this integration of distributed expertise is organised by the Special Interest Group 'Information' of the German Physical Society (DPG) will be shown in this presentation.
    The Physics Virtual Library ViFaPhys is a project funded by the Deutsche Forschungsgemeinschaft (German Research Foundation, DFG) and coordinated by the TIB, the German National Library of Science and Technology.



    CONESYS - the COntent NEtwork SYStem

    Sandro Zic, ZZ/OSS Information Networking

    Abstract:
    CONESYS is the Open Source COntent NEtwork SYStem for peer-to-peer content and knowledge management. In a content network, digital objects are free to move around and be replicated while they still remain accessible. With CONESYS, such content networks can be set up to improve the availability and performance of Internet, Intranet, or Extranet services within or across administrative domains. CONESYS offers highly adaptive functionality to integrate legacy systems into a cross-server content and knowledge management infrastructure, but also to plug in new software modules and connectors into its framework.

    An innovative technology allows CONESYS to decouple any digital object from the physical nodes and addressing schemes (e.g. URLs) that carry it. This is called the Distributed Digital Objects (DDO) system, based on a kind of IP address combined with an arbitrary unique identifier for digital items. A CONESYS content network basically knows two kinds of nodes: content providers and content collectors. Content providers have some type of content they make available to the rest of the network. Content collectors are looking for a piece of content, which they harvest from content providers. Potentially, any content provider can also act as a content collector, thus allowing for a wide range of network topologies, especially peer-to-peer computing.
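
    The decoupling idea can be sketched as a resolver that maps a stable object identifier to whatever replicas currently hold the object; the identifiers and method names below are invented for illustration, not the actual DDO interface.

        # Sketch: a resolver that keeps a digital object reachable while
        # its replicas move between physical nodes.

        class DDOResolver:
            def __init__(self):
                self.locations = {}  # object id -> set of replica URLs

            def publish(self, object_id, url):
                # A content provider announces that it holds a replica.
                self.locations.setdefault(object_id, set()).add(url)

            def withdraw(self, object_id, url):
                # The object stays resolvable while one replica remains.
                self.locations.get(object_id, set()).discard(url)

            def resolve(self, object_id):
                return sorted(self.locations.get(object_id, set()))

        resolver = DDOResolver()
        resolver.publish("ddo:4711", "http://node-a.example.org/obj/4711")
        resolver.publish("ddo:4711", "http://node-b.example.org/obj/4711")
        resolver.withdraw("ddo:4711", "http://node-a.example.org/obj/4711")
        print(resolver.resolve("ddo:4711"))  # the surviving replica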

    The network is self-configuring: initial parameters defined in XML description files are distributed between content collectors and providers, similarly to the DNS system. New content nodes are registered automatically with the network. Communication between servers participating in the content network is achieved by interoperable or native connectors provided by CONESYS (such as SOAP, XML-RPC, Java RMI, Z39.50, OAI, etc.).



    Session 5:
    Distributed Open eLearning Sources and Systems


    Teachware On Demand - Development and Evaluation of a web-based architecture for self-regulated learning: Putting the Learner in charge

    Elke Brenstein, Humboldt University Berlin
    Andreas Wendt, Fraunhofer-Institut für Software- und Systemtechnik ISST

    Abstract:
    Most discussions about learning object based learning management systems focus on the benefits of modular content development from the point of view of the organization and the instructor. In our presentation, we critically discuss the advantages and challenges of creating and managing learning objects so that they can be retrieved, reused and adapted for different purposes. More importantly, however, we also pose the question regarding the benefits of dynamic meta-data enrichment for the learner. To evaluate the potential of the Teachware on Demand approach for self-regulated learning, an open and flexible web-based learning environment was developed in order to provide learners with a supportive tool for exploring the many facets of a knowledge base on "Learning how to learn". The knowledge base was especially constructed to contain a broad range of learning objects which differ in terms of representational form and suitability for different didactical purposes. The environment logs usage data and supports active involvement with the material base and communication with fellow learners by allowing learners to actively contribute to the knowledge base. Learning objects thus "become alive" through the use of dynamic "contextual" meta-data. The goal of this formative evaluation effort was to determine how learning-relevant metadata information can be visualized and made accessible for individual selection in order to enrich the learning experience.



    Building information and communication competence in a collaborative learning environment (K3)

    Joachim Griesbaum, Michael Bürger, Rainer Kuhlen, Universität Konstanz

    Abstract:
    K3, work in progress, an acronym for Kollaboration (collaboration), Kommunikation (communication) and Kompetenz (competence), will provide knowledge management software that supports collaborative knowledge production in learning environments. The underlying hypothesis is that collaborative discourse fosters information as well as communication competence better than traditional methods of instruction. The collaborative, communicative paradigm of K3 is supported by asynchronous communication tools as a means of constructivist learning methodology.
    In the summer semester of 2003, the course "Communicative paradigm of knowledge management" served as a first case study of K3's didactic concepts in teaching, with the help of traditional communication software such as an electronic communication forum, but also using the online collaborative dictionary ENFORUM (www.enforum.net). The conceptual design of the lecture was based on blended learning and a variation and combination of behaviouristic teaching methods, like traditional lecturing, and constructivist teaching methods, in collaborative group work assignments and individual glossary work assignments (using ENFORUM).
    The students' evaluation of this lecture provided some important clues concerning the further development of K3. Basic findings are: individual concept-oriented work presupposes well-developed learning skills; such skills can be learned stepwise, e.g. with the help of clear-cut group work assignments. Clear and specific working guidelines as well as immediate rating feedback are seen as very important orientation guides. Within these constraints, students rate self-determined collaborative work and autonomous individual work very highly and find them inspiring. Measuring the success of learning within this paradigm is still a challenge, because permanent intellectual evaluation of students' entries in the forum and the dictionary is very costly, whereas automatic rating processes, with their limited quality control, are not well accepted by students so far. Altogether, participants judged the learning success achieved by collaborative and electronically supported techniques to be at least as high as the success achieved when lectures were the primary means of teaching. In general, the students' feedback with regard to the didactical course concept was completely positive. On the software side, in particular with respect to the electronic communication forum, there were some complaints that the available orientation means (so far mainly based on the thread paradigm) were insufficient; nevertheless, in general, asynchronous communication software as a basic means for knowledge sharing was assessed as useful, as long as the negative effects of cognitive overload can be avoided. Therefore one of the major challenges for K3 is the development of adequate methods for structuring communication forums and for the visualization of knowledge and discourse structures in collaborative work.



    Personalisation in Elena: How to cope with personalisation in distributed eLearning Networks

    Peter Dolog, Learning Lab Lower Saxony

    Abstract:
    One of the aims of the ELENA project (www.elena-project.org) is to support personalized access to distributed learning repositories. In this talk we will present the approach to personalization we employed in ELENA. We take advantage of semantic web technologies and metadata description standards. Explicit descriptions of learning objects, described in RDF bindings of LOM and DC, and of learners, described in an integrated RDF schema of the PAPI and IMS LIP standards, enable us to employ the reasoning and querying facilities of the P2P Edutella infrastructure. Our approach is based on rule-based matching of learning object and learner descriptions, either to recommend learning services or learning objects provided by different providers, or to adapt and customize the access, delivery, and consumption of the learning services and learning objects.
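
    A toy sketch of the rule-based matching step: the attribute names are invented, and plain Python dictionaries stand in for the RDF descriptions the real system queries via Edutella.

        # Sketch: rule-based matching of a learner profile against
        # learning object metadata, as in the recommendation step above.

        def matches(learner, learning_object):
            """Apply simple recommendation rules; every rule must hold."""
            rules = [
                # Rule 1: the object's language is one the learner reads.
                learning_object["language"] in learner["languages"],
                # Rule 2: the learner has all prerequisite competencies.
                set(learning_object["prerequisites"]) <= set(learner["competencies"]),
                # Rule 3: the topic lies within the learner's interests.
                learning_object["subject"] in learner["interests"],
            ]
            return all(rules)

        learner = {"languages": ["en", "de"],
                   "competencies": ["linear_algebra"],
                   "interests": ["quantum_mechanics"]}
        lo = {"language": "en", "prerequisites": ["linear_algebra"],
              "subject": "quantum_mechanics"}
        print(matches(learner, lo))  # True -> recommend this object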



    Physics Educational Resources: Distributed Retrieval and Quality

    Julika Mimkes, Institute for Science Networking Oldenburg GmbH

    Abstract:
    The project physik multimedial, funded by the German ministry for education and research, has developed a set of elearning modules to enhance physics lectures. Exercises can be created by lecturers and solved by students online with automatic control and feedback; self-study units with simulations may substitute or complement lectures; a media database offers simulations for download; LiLi - Links to physics elearning material - is a catalogue and a search engine for elearning material worldwide; and a didactic module gives instructions on how to learn and teach with the offerings of physik multimedial. At the moment, it is also possible to create and run courses on the project's elearning platform at no charge.



    Physik Multimedial and its eLearning Platform: concepts and implementation

    Helmut Schottmüller, University of Bremen

    Abstract:
    The project physik multimedial offers its services in an open, internet-based learning and teaching environment. This environment is based on the Campus Virtuell internet platform developed at the University of Oldenburg. Both systems are written in the Perl scripting language, work with a MySQL database server, and run under the Apache webserver. This assures easy and platform-independent installation and operation of the server. For client applications only a webbrowser is needed. The talk will point out the central concepts and the philosophy behind both systems. It will give an overview of the implementation of both systems and close with the conceptual design of the future versions of physik multimedial/Campus Virtuell.



    Automation Techniques for Broadcasting and Recording Lectures and Seminars

    Robert Mertens and Rüdiger Rolf, Universität Osnabrück

    Abstract:
    E-learning plays an increasingly important role in modern university teaching. One fast and easy way to produce high-quality content for selected e-learning scenarios is broadcasting and recording lectures and seminars. Due to cost-efficiency in courses shared by two or more universities and the possibility to communicate with experts at far-away locations, broadcasting technologies such as video/audio conferencing provide an attractive addition to live courses. For conventional lectures, recordings have proven to be valuable, as they make it possible to repeat important parts of the lecture for students who were unable to attend or for those who did not grasp specific topics during the lecture.
    For most lecturers, however, these new technologies are hard to use since they require constant attention and specialised technical knowledge. It is thus crucially important to reduce technology interaction during recording and broadcasting in order to facilitate the use of these promising technologies. In the first part of the talk, we will identify a number of criteria for automating and thus easing the use of broadcasting (using video conference technology to share a lecture between two universities) and recording a lecture (to provide the recorded material to students at the home and other universities) while maintaining, and in some cases even improving, the quality of the filmed material.
    When broadcasting a lecture it is important to move the camera's focus to where the attention of an interested student would be (to the professor who is speaking, or to the student who is asking a question) and to select the best audio and video inputs for this position. The automation software must make it easy for lecturers or their students to indicate what should be broadcast.
    In recording a lecture it is essential to enrich the video with a synchronized representation of the material presented by the lecturer, e.g. PowerPoint slides, simulation programs, or videos. The event of changing a slide must automatically be linked to the exact position in the video. Thus the video is synchronised with the sequence of slides the students follow on their screen.
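
    The slide-video synchronisation just described amounts to recording a timestamped event per slide change and using it later as an index into the video; a minimal sketch, with class and method names invented for illustration:

        # Sketch: link slide-change events to positions in a lecture
        # recording. Timestamps are captured on each slide change and
        # later serve as an index into the video.
        import time

        class SlideSync:
            def __init__(self):
                self.start = time.monotonic()
                self.events = []  # (seconds into recording, slide number)

            def slide_changed(self, slide_number):
                # Called by the presentation software on every slide change.
                self.events.append((time.monotonic() - self.start, slide_number))

            def slide_at(self, seconds):
                """Which slide is on screen at a given video position?"""
                current = None
                for t, slide in self.events:
                    if t <= seconds:
                        current = slide
                return current

        sync = SlideSync()
        sync.slide_changed(1)       # recording starts on slide 1
        time.sleep(0.1)
        sync.slide_changed(2)
        print(sync.slide_at(0.05))  # -> 1
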
    In the second part of the talk we will analyze to what degree these criteria are fulfilled by existing systems and we will give a brief overview of state-of-the-art technology in this field.



    Learning and teaching material in an interdisciplinary context

    Kerstin Zimmermann, ftw. Forschungszentrum Telekommunikation Wien

    Abstract:
    The internet has become essential to academic research over the last years. Now it is also a medium for learning and teaching. What started with converted lecture scripts on lecturers' homepages spread out to Java applets for simulation and visualisation, and has become a commercial market for complete courses in virtual learning environments.

    Let us focus on freely available online material and its retrieval. Universities have started to build up servers with materials for their students, grouped by disciplines. But what about interdisciplinary or minor subjects? Where can they be found? E-publication servers are not the right place to search. In the natural sciences several projects are under way to collect and classify relevant material.

    In the talk we will give some examples of collections in engineering, mathematics and physics. After a closer look at the types of material, the metadata will also be discussed in detail.



    Session 6:
    User Outreach and Satisfaction Measurement


    The Quality of Education-related Portals in Germany: Results of a Multi-method Evaluation of the German Education Server and Beyond

    Elke Brenstein, Humboldt University Berlin

    Abstract:
    The German Education Server is a meta-server which provides reliable and up-to-date information for users in different areas of education: primary and secondary education, higher education, vocational education, continuing and adult education, educational research, etc. High usage statistics and positive user feedback document the general acceptance of what the portal has to offer. Log file analyses provide valuable insight into the differential usage of the various content areas. However, little was known so far about user-group-specific satisfaction with individual aspects of the content and its presentation in comparison with other education-related portals. In this presentation, we report on a triangular evaluation effort in which results from general log file analyses were combined with attitudes and opinions gathered in a comprehensive nation-wide representative online survey and individual feedback from in-depth usability tests, to assess the expectations and satisfaction of users and non-users with regard to the presentation and dissemination of educational information.



    How do visitors use and rate digital dissertation archives? A case study of the Document and Publication Server of Humboldt University Berlin

    Bettina Berendt, Elke Brenstein, Yunfan Li and Bert Wendland, Humboldt University Berlin

    Abstract:
    Technical progress in electronic publishing affords increasingly sophisticated archiving and retrieval options for authors as well as readers of university document publishing services. But are these options really used? To be successful, a document publishing site needs not only rich and interesting content. It also needs an interface with high usability, and a good communications / Internet marketing strategy that makes it known to its potential users.
    In this presentation, we address usability criteria that are specific to document publishing sites, and we present an investigation of the usage of the Document and Publication Server of Humboldt University Berlin as a case study (http://edoc.hu-berlin.de).
    The digital dissertation archive of this server is highly structured using the SGML-based Dissertation Markup Language DiML, and the search interface offers advanced options for semantic search based on these metadata. In a Web-based survey, 68 respondents gave their opinions on the search features of the site and described their own usage of these features. They also gave their opinions on the electronic publishing of academic documents in general. Results confirmed that while respondents were generally content with the site's interface, most of them approached the site in a rather conventional way, "searching" with keywords and "browsing" to locate the documents they need. This indicates that many online users still experience difficulties when trying to use metadata in a structured way. We outline further work to help students and researchers use metadata productively in research, education, and training.



    Session 7:
    Coherent Realization and Political Impact


    The Research and HE Information Market

    Hans E. Roosendaal, University of Twente

    Abstract:
    The research and HE information market will in future be based on a federated network of repositories of information relating to research and education that conform to open standards, and an accommodating infrastructure that allows users the easiest and fastest possible access to information in all of these repositories. The information covered by such a network will comprise not only information material for research and HE, but also management information relating to this material. The market is the research and HE community; its main focus is open standards. This federated network will be global.
    This generally shared vision describes a real-life network of repositories of information relating to research and education, containing both research and education information in the widest sense and management information to support access to and disclosure of this information. The user, be this a student, a teacher or a researcher, will be able to make use of this information from any site and in all possible ways. Many research and HE institutions and other knowledge-intensive organisations and companies worldwide are developing novel but often disparate approaches to the management of online scholarly and educational resources, among other things by creating institutional repositories. These repositories can be institutional repositories containing the information products of an institution's research and education, or disciplinary repositories for research information, educational information, or combinations thereof.
    To achieve wide acceptance of the network it is mandatory that it contain a sufficiently large critical mass of information material. Critical mass is also needed to support a variety of value chains for the information market, representing the different organisational models, legal models and business models the individual stakeholders see fit for the exchange of specific information products. The creation of a cohesive and coherent network guarantees the best return on investment for all stakeholders on their own terms, be they public (e.g. universities) or private (e.g. publishers) organisations. It is in the interest of each individual stakeholder to strive for maximum flexibility in this marketplace. This can best be achieved by developing a strategy that allows maximum compliance with the vision in the marketplace. Stakeholders should then share a basic conception of a high-level strategy as a starting point for developing their individual strategies.
    Main issues of such a high-level strategy, allowing strategy development at different aggregation levels, will be discussed.
