Web Focus Corner: Running an Institutional Web Service
About The Workshop
“Excellent; a good opportunity to update knowledge and meet others.” “A much needed workshop. Very useful to hear from the speakers and to find out about other sites in the discussion groups. Same again next year please!” “Extremely useful and timely.” Just three of the comments received from participants at the workshop on Running An Institutional Web Service.
The workshop was held at King’s College London from lunchtime on Wednesday, 16th July until lunchtime the following day. A total of 92 participants attended, together with myself and Hazel Gott (both of UKOLN) and Malcolm Clark (KCL), who organised the workshop.
The workshop participants reflected the composition of many institutional web teams, with representation from system administrators, applications support staff, web editors, information management specialists and web designers.
Workshop Topics
Presentations at the workshop reflected the range of backgrounds amongst the participants: a mixture of introductory and more advanced technical talks, together with talks on policy issues and information management models. The presentations covered the following topics:
- Charisma or camel? A sociotechnical approach to Web redesign
- Information Flow and the Institutional WWW
- Networking For Webmasters
- WWW / Database Integration
- Security and Performance Issues
- WWW Caching
- Web Tools
- Next Year’s Web
The presentations were given on the afternoon of the first day. In the final session on the first day, participants selected a discussion group to take part in on day 2. The following discussion groups took place:
- Design
- Information Flow
- Web Tools
- Caching
- Metadata
- Trials and Tribulations of a Web Editor
The Presentations
Charisma or Camel? A Sociotechnical Approach to Web Redesign [1]
Following the introduction to the workshop, Dave Murie, University of Dundee, gave a review of the processes involved in the redesign of the corporate web pages within his institution. The history of the development of the web site will be familiar to many: an enthusiast initially established a web presence, which slowly grew to include a handful of departments, followed by a growing awareness of the problems of this organic growth, with no group responsible for overall coordination and no overall look and feel for the pages.
These issues were addressed by setting up an Electronic Publishing Editorial Board (EPEB), a high profile committee which reported to the University’s Information Services Committee. The EPEB was responsible for providing guidance on aesthetics and a code of practice for webmasters, and for ensuring that legal and ethical obligations were met. The initial recommendations included (a) the need for a disclaimer to indicate the limits of institutional responsibility, (b) the need for legal advice on liability, (c) the need for guidelines for information providers and mechanisms for encouraging/enforcing compliance, and (d) redesigning the institution’s “home page”, which was felt to be too long.
The redesign exercise was carried out by the School of Design. A number of user groups were identified, which included “techies”, who had an understanding of web technologies (such as accessibility issues for the disabled), and postgraduate students, who could provide an end-user view on proposals.
The students’ view of the web was that the redesign exercise should address the confused look of the site, aim for ease of learning and ease of use, and improve the performance of the web site. The Editorial Board felt that too much information was provided in a cramped layout. There was a need to split the information to support two separate needs: intra- and inter-organisational. There was also a need to improve the quality of the visual image to support the web’s increasing importance for student recruitment.
Following these initial discussions an iterative design approach was taken, with the School of Design providing a number of design options, which were refined in the light of usability testing and comments from user groups.
Dave concluded by identifying the positive points of Dundee’s approach, which included the use of multiple stakeholders (Human Computer Interface specialists, design specialists, users and departmental information providers), the provision of a tool kit and libraries of images for use by information providers, and a suitable compromise between capability and practicability.
Dave also listed a number of problem areas, including concerns over accessibility for disabled users and information providers attempting to use the latest technologies which could cause backwards compatibility problems. Web designers also found the web to be a frustrating medium!
In conclusion:
- The redesign exercise was a major task, which required a stakeholder approach.
- There were dangers in being either technology-led or designer-led.
- It is important to keep the needs of the user in mind.
- Good communication is of critical importance.
Information Flow [2]
Following the talk on approaches to web design, Colin Work, University of Southampton, gave a presentation on Information Flow and the Institutional WWW. Colin’s talk was based on his institution’s web-based information services. The talk described information flow models and was independent of any particular technology.
Colin defined ‘information flow’ as the movement of information objects from the point of origin to the “target” user over time. Information objects (which in other contexts are sometimes referred to as Document-Like Objects or DLOs) have a number of managerial attributes (or metadata) including:
- Author of the object: responsible for the information content
- Owner (publisher): responsible for making the information available
- Encoder / printer: involved in the processing of the information
All objects should have a single “owner”, though they may have multiple authors and encoders.
The “authority” of an object is an additional implied attribute. If this is not made explicit, false assumptions may be made.
A web resource is a snapshot of the information flow. This gives rise to a number of questions:
- Is the snapshot at the right point in the flow?
- Does the snapshot carry the required authority?
- Is the ownership clear?
- Are the author and owner aware of all uses of the information?
A number of “tools” can be used to support the information flow, including:
- An information strategy for setting goals
- An information policy which provides a methodology for achieving the goals
- An acceptable use policy which defines constraints
- An editorial board for enforcement of the above
- Charging mechanisms (both real and notional) which can help to limit demands
Institutions may adopt a number of information management models including:
- No management
- An “anarchical” model, which reflects how the web initially developed within many institutions. This model facilitates rapid growth, but has many disadvantages including uneven coverage, diverse interfaces, contradictory information, uncertain responsibilities, etc.
- Centralised management
- The centralised management model will be familiar to managers of information systems available before the web. In this model, information has to be sent to a centralised group which is responsible for “processing” the information before it is made available. This model provides consistency in style and presentation, but does not scale well, with a potential bottleneck in the information flow.
- Distributed management
- In this model, responsibilities can be devolved to departments/groups, providing a more responsive and flexible service. However, the model can tend towards the “No management” model unless there are effective information policies in place with an editorial board to ensure compliance.
Colin then gave an example of how different models could be applied to making an institution’s prospectus available on the web. These models included:
- Processing file used to produce hard copy
- This can involve reformatting, which can be time-consuming. The online version may have an inappropriate design.
- “Intercepting” information sent from department to the Prospectus editor
- This model is more flexible. However, there is a danger that changes made by the Prospectus editor will not be reflected in the online version.
- Merging online versions produced by departments to produce print version
- In this model, departments provide their own contribution for the prospectus on their departmental web pages. This information is processed centrally for the production of the print version. In this model there are dangers that the departmental quality control is not as rigorous as that provided centrally. Also the departmental information may contain information which has not been authorised (e.g. courses which have not been approved by a course committee).
- Producing online and print versions in parallel
- This model may require additional resources and technical expertise.
Colin concluded by recommending that institutions have an information strategy / policy and an editorial board which is empowered to enforce institutional policies. Distributed and centralised management models have strengths and weaknesses. The choice is likely to reflect the organisation’s culture.
Networking For The Web Master [3]
John MacCulloch, UKERNA, gave the first technical presentation of the workshop: a gentle introduction to computer networking. John described networking specialists’ concerns over network bandwidth, and gave an example of a personal home page with the following file sizes:
- Raw text - 1K
- HTML - 2K
- HTML plus image - 40K
- PostScript - 100K
- PostScript (300 dpi) - 1,000K
Law’s law (named after Derek Law, Director of Information Services and Systems at King’s College London) states that “A picture may be worth a thousand words, but a JPEG file takes longer to transmit”. An awareness of bandwidth issues is of particular importance today in the light of ongoing discussions of charging models for international bandwidth to the US, Europe and elsewhere.
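As a rough illustration of what these sizes mean in practice, the short sketch below estimates transfer times for the files listed above; the link speeds are assumptions chosen for the example, not figures given in the talk.

    # Rough transfer-time estimates for the file sizes quoted above.
    # The link speeds are illustrative assumptions, not figures from the talk.
    sizes_kbytes = {
        "Raw text": 1,
        "HTML": 2,
        "HTML plus image": 40,
        "PostScript": 100,
        "PostScript (300 dpi)": 1000,
    }
    links_kbits = {"28.8 kbit/s modem": 28.8, "2 Mbit/s campus link": 2000}

    for link, kbits_per_second in links_kbits.items():
        print(link)
        for name, kbytes in sizes_kbytes.items():
            seconds = kbytes * 8 / kbits_per_second  # kilobytes -> kilobits, then divide by line speed
            print(f"  {name}: ~{seconds:.0f} seconds")

On the modem figures, the 1,000K PostScript file takes around four and a half minutes to transfer, compared with well under a second for the raw text.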
John concluded by advising participants to look at their web site through different eyes: for example using a number of web clients and across low bandwidths.
Database Integration [4]
After the coffee break Brenda Lowndes, University of Liverpool, gave an overview of various models for integrating databases with the web.
This is an important topic for a number of reasons:
- Providing access to existing corporate databases
- Use of database management tools for maintaining information (e.g. avoiding duplication of data and of the resources needed to maintain it, and avoiding inconsistencies across the data)
- Providing a familiar (web) interface to resources
Databases can be made available using two models: (1) static access and (2) dynamic access. In the static access model a program is run offline to produce the HTML pages, which can then be copied onto the appropriate area of the web server. The process can be automated and scheduled to run at specific times, or triggered by a database update operation. This model is appropriate for non-volatile data and for supporting a standard set of queries.
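A minimal sketch of the static model might look like the following; the database name, table and output directory are invented for illustration, and the talk did not prescribe any particular tools.

    # Static access model: an offline batch job regenerates HTML pages from a
    # database; the resulting files are then served as ordinary static pages.
    # Database name, table and output directory are illustrative assumptions.
    import sqlite3
    from pathlib import Path

    OUTPUT_DIR = Path("/var/www/htdocs/courses")

    def rebuild_pages(db_path="prospectus.db"):
        conn = sqlite3.connect(db_path)
        rows = conn.execute("SELECT code, title, description FROM courses")
        OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
        for code, title, description in rows:
            page = (f"<html><head><title>{title}</title></head>"
                    f"<body><h1>{title}</h1><p>{description}</p></body></html>")
            (OUTPUT_DIR / f"{code}.html").write_text(page)
        conn.close()

    if __name__ == "__main__":
        rebuild_pages()  # run from cron at set times, or after a database update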
In the dynamic access model HTML pages are generated from the database at the time the data is requested, so the data served is always up to date. There are several models for implementing dynamic access, including:
- running a CGI program which accesses a backend database directly
- running a CGI program which accesses a database server
- accessing a database server using database networking software or ODBC (Open DataBase Connectivity)
- using web server extensions in conjunction with HTML templates
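As an illustration of the first of these options, the sketch below shows a small CGI program which queries a backend database directly and generates the page on request; the database schema and field names are assumptions made for the example.

    #!/usr/bin/env python3
    # Dynamic access model: a CGI program queries the database when the page is
    # requested and generates the HTML on the fly. The database schema is an
    # illustrative assumption.
    import os
    import sqlite3
    from urllib.parse import parse_qs

    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    code = query.get("code", [""])[0]

    conn = sqlite3.connect("prospectus.db")
    row = conn.execute(
        "SELECT title, description FROM courses WHERE code = ?", (code,)
    ).fetchone()
    conn.close()

    print("Content-Type: text/html\n")
    if row:
        title, description = row
        print(f"<html><body><h1>{title}</h1><p>{description}</p></body></html>")
    else:
        print("<html><body><p>No matching course was found.</p></body></html>")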
Applications of these models include Microsoft’s IDC (Internet Database Connector), an extension of Microsoft’s Internet Information Server, and Microsoft’s Active Server Pages, a server-side environment which can access any ODBC database.
Brenda provided a pointer to sources of further information she is maintaining [5].
Security [6]
Mark Cox, UKWeb, gave an overview of security and performance issues. Mark is a former research student at the University of Bradford who now works for a web company based in Leeds, and is a member of the Apache development group.
Mark pointed out that malicious ActiveX components can potentially read and modify files on a local machine. Even though Java has a better security model, bugs in the implementation of a Java Virtual Machine can introduce security loopholes.
In practice, however, privacy is likely to be a greater concern (email and the use of floppy disks, for example, pose security threats but neither is banned within universities - Brian Kelly). Browsers are designed for use by a single user. Use of a shared computer can compromise a user’s privacy by providing access to history lists, the client-side cache, cookie files and information protected by passwords.
Allowing users to publish static HTML pages does not normally compromise the security of a system; active pages are another matter.
Web servers such as Apache can be configured to allow safe use of Server-Side Include technologies, so that files for which the user may have read permissions cannot be served. CGI scripts can be more of a security concern. CGI programs execute as a single user. A badly configured web server, in which CGI programs were executed as a privileged user (such as root!), would enable a malicious information provider to cause damage. Such a configuration could also allow an innocent information provider to inadvertently provide a back door for an end user to cause damage. For these reasons, some Internet service providers do not allow information providers to run arbitrary CGI programs - instead a library is provided for common functions, such as page counters, mailing results of form submissions, etc.
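As a flavour of the sort of ready-made function such a library might provide, the sketch below mails the contents of a form submission to a fixed address without ever passing user input to a shell, which is one of the classic ways a careless CGI script can open a back door. The script, the addresses and the field handling are invented for the example.

    #!/usr/bin/env python3
    # Illustrative "form mailer" of the kind an ISP's CGI library might provide.
    # Form fields are parsed and handed to the mail system directly; nothing
    # supplied by the user is interpolated into a shell command. The addresses
    # are invented examples.
    import os
    import sys
    import smtplib
    from email.message import EmailMessage
    from urllib.parse import parse_qs

    body = sys.stdin.read() if os.environ.get("REQUEST_METHOD") == "POST" else ""
    form = parse_qs(body)

    msg = EmailMessage()
    msg["To"] = "webmaster@example.ac.uk"    # fixed recipient, not taken from the form
    msg["From"] = "www-form@example.ac.uk"
    msg["Subject"] = "Web form submission"
    msg.set_content("\n".join(f"{name}: {values[0]}" for name, values in sorted(form.items())))

    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

    print("Content-Type: text/html\n")
    print("<html><body><p>Thank you, your comments have been sent.</p></body></html>")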
Caching [7]
George Neisser, University of Manchester, described the important role that caching plays in conserving scarce network bandwidth. He described various caching infrastructures, including implementations at departmental, institutional, national and international level. George outlined the history of the national caching infrastructure within the UK HE community, which was pioneered by HENSA at the University of Kent at Canterbury. On 1st August 1997 this service was replaced by a new national service hosted by the universities of Manchester and Loughborough. The new service has both a service component and a development component, and will provide a number of mechanisms for liaison with the user community, including a web site, a regular newsletter, mailing lists, a help desk and a fault reporting system.
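To give a flavour of how a client makes use of such a cache, the sketch below fetches a page through an institutional proxy rather than directly; the proxy host and port are invented, and in practice a browser would simply be configured with the address published by the local or national caching service.

    # Fetching a page via an institutional or national cache rather than
    # directly. The proxy host and port are invented for illustration.
    import urllib.request

    proxy = urllib.request.ProxyHandler(
        {"http": "http://wwwcache.example.ac.uk:3128"}
    )
    opener = urllib.request.build_opener(proxy)

    with opener.open("http://www.ukoln.ac.uk/") as response:
        print(response.getcode(), len(response.read()), "bytes")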
Web Tools [8]
David Lomas, University of Salford, described the work of the UCISA-SG WebTools working party. This group will be evaluating a range of web tools, including browsers and other end user tools, authoring and graphics tools, web page management tools, server software and management tools and database integration tools.
Next Year’s Web [9]
Brian Kelly, UKOLN, University of Bath, gave a brief summary of new developments on the web and predictions of how the web will look in a year’s time. He mentioned XML, the Extensible Markup Language, a possible successor to HTML which is ideally suited to applications with structured information. He described the development of visualisation software, and accompanying protocol developments, for improving navigation around web sites. The development of metadata standards, such as Dublin Core, should provide improvements in searching and in web site management. Other growth areas are likely to include use of Java, support for maths and improved network performance through developments in HTTP.
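As a flavour of how Dublin Core elements can be embedded in an ordinary web page, the sketch below writes a handful of elements as HTML META tags, following the DC.* naming convention used for embedding Dublin Core in HTML; the example values are invented.

    # Embedding a few Dublin Core elements in an HTML page as META tags.
    # The element names follow the DC.* convention for HTML embedding;
    # the values are invented examples.
    def dublin_core_head(title, creator, date, subject):
        fields = {
            "DC.title": title,
            "DC.creator": creator,
            "DC.date": date,
            "DC.subject": subject,
        }
        metas = "\n".join(
            f'  <meta name="{name}" content="{value}">' for name, value in fields.items()
        )
        return f"<head>\n  <title>{title}</title>\n{metas}\n</head>"

    print(dublin_core_head("Undergraduate Prospectus", "University of Poppleton",
                           "1997-07-16", "prospectus; admissions"))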
The Discussion Groups
On day 2 the participants were divided into six discussion groups. A summary of the discussion groups is given below.
Design
The group made a number of recommendations:
- A workshop on design issues should be arranged. The workshop should ensure that issues affecting people with disabilities and those with low bandwidth connections are addressed.
- A list of resources on recommendations on copyright issues, etc. should be provided.
Information Flow
The group made a number of recommendations:
- Set up a Working Group to look into administrative metadata.
- Establish national UK HE recommendations for administrative metadata.
Web Tools
The group made the following recommendation:
- Set up a group (similar to AGOCG) to coordinate evaluation work on web tools.
Caching
The group concluded that there is a need to raise awareness of caching, particularly in the light of the increasing costs of international bandwidth.
Metadata
The group agreed that metadata was good in theory. The major questions were “How do you implement it, especially in a distributed environment? Are there tools to help? Can it be automated?”
The group made a number of recommendations:
- Use of Dublin Core metadata should be encouraged.
- A pilot for metadata should be set up for use with prospectus material.
Trials and Tribulations of a Web Editor
The group made a number of recommendations:
- A presentation should be made to a meeting of the CVCP (Committee of Vice Chancellors and Principals) to demonstrate the importance of the WWW to UK HE institutions, with the aim of increasing senior management understanding of and support for the use of the Web within their institutions.
- A standardised job description for a typical Web Editor position should be produced.
Conclusions and Further Information
The feedback given at the workshop and in the evaluation forms indicated that, despite some concerns over the conditions at King’s College London (the temperature of the room and the noise caused by building work), the workshop was highly rated. It was clear that participants valued the opportunity to meet with others and discuss mutual problems and solutions.
A number of participants suggested that a national workshop for members of web teams should be repeated, but should last longer than lunchtime to lunchtime. In addition, there were several suggestions for smaller, more focussed workshops, on topics such as web design and management issues.
A longer report on the workshop has been prepared [10]. In addition, Justin MacNeil, Netskills, has written a personal report [11].
References
1. Charisma or camel? A sociotechnical approach to Web redesign,
   http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html#design-presentation
2. Information Flow,
   http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html#info-flow-presentation
3. Networking For The WebMaster,
   http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html#networking-presentation
4. WWW / Database Integration,
   http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html#database-presentation
5. Database references,
   http://www.liv.ac.uk/~qq48/publications/html/dbweb.html
6. Security,
   http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html#security-presentation
7. Caching,
   http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html#caching-presentation
8. Web Tools,
   http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html#webtools-presentation
9. Next Year’s Web,
   http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html#futures-presentation
10. Workshop Information,
    http://www.ukoln.ac.uk/web-focus/events/workshops/webmaster-jul1997/intro.html
11. Justin MacNeil’s workshop report,
    http://www.netskills.ac.uk/reports/webfocus/inst-web-jul97.html
Author Details
Brian Kelly
UK Web Focus
Email: B.Kelly@ukoln.ac.uk
UKOLN Web Site: http://www.ukoln.ac.uk/
Tel: 01225 826838
Address: UKOLN, University of Bath, Bath, BA2 7AY