You can protect your own site (or part of it) by requiring visitors to enter a username and password before they can view its content. The example below shows how to protect your entire home site. Protecting only a part of the site follows the same principle; simply replace the public_html directory with the path of the directory you want to protect, e.g. public_html/subdirectory.
Password-protecting a website requires two files: a .htaccess file, created in the directory to be protected, which specifies the protection applied to that directory, and a username/password file (e.g. users) which contains the usernames and passwords used to access the protected pages.
In the following examples, PuTTY has been used for the connection and Pico for creating and editing the files. Other software can of course be used if preferred.
Creating the .htaccess file:
AuthUserFile /nashomeX/käyttäjätunnuksesi/public_html/hidden/users
AuthGroupFile /dev/null
AuthName ByPassword
AuthType Basic
<Limit GET>
require valid-user
</Limit>
Give the file you created read permission with chmod ugo+r .htaccess
Creating the users file:
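One common way to create the users file is Apache's htpasswd tool; whether it is available on the server may vary, so the commands below are only a sketch. The directory and file names follow the AuthUserFile path in the example above, and myusername is a placeholder for the account you want to grant access to:
mkdir -p ~/public_html/hidden
cd ~/public_html/hidden
# -c creates the file and adds the first user; htpasswd prompts for the password
htpasswd -c users myusername
# further users are added without -c
htpasswd users someotheruser
# give the password file read permission, as with .htaccess
chmod ugo+r users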
Verify that the protection works by opening your web browser and going to your protected page (e.g. http://users.jyu.fi/~username). If all went well, the page should only open once a valid username and password have been entered.
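You can also check the protection from the command line; below is a quick sketch using curl, where the address is the same example page and user:password stands for credentials you added to the users file:
# without credentials the server should respond with 401 Unauthorized
curl -I http://users.jyu.fi/~username/
# with a valid username and password the page should be returned
curl -u user:password http://users.jyu.fi/~username/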
Access to a page can also be restricted with an .htaccess file, for example to machines in a specific network domain.
An example of the contents of an .htaccess file:
require host jyu.fi
require ip 130.234.10.216
require not host it.jyu.fi
This allows access from the university's network domain (jyu.fi) and from the machine with IP address 130.234.10.216; access from other machines is denied. In addition, as an example, access is specifically denied to machines in the it.jyu.fi domain under jyu.fi.
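Note that in the newer Apache syntax a negated rule (require not ...) is only accepted inside a <RequireAll> or <RequireNone> container, so if the server rejects the flat form above, the same rules can be written roughly as follows (a sketch based on Apache's mod_authz_core documentation; the domains and the IP address are the same examples as above):
<RequireAll>
    # deny machines in it.jyu.fi regardless of the other rules
    require not host it.jyu.fi
    <RequireAny>
        # allow the university's network domain...
        require host jyu.fi
        # ...and this single machine
        require ip 130.234.10.216
    </RequireAny>
</RequireAll>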
If you want to restrict only some access methods (GET, PUT, POST, ...), the access requirements should be placed inside a <Limit> section. For example:
<Limit GET>
require host halava.cc.jyu.fi
</Limit>
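Methods not listed in <Limit> are left unrestricted. If you instead want the requirement to apply to every method except the listed ones, Apache also provides a <LimitExcept> section; a sketch using the same example host:
<LimitExcept GET>
    # applies to all HTTP methods except GET
    require host halava.cc.jyu.fi
</LimitExcept>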
NOTE! The syntax of the access requirement specifications changed in a web server update; the examples above use the newer Require-based syntax.
By default, your own website is public and open to the world. However, there may be times when you want to prevent search robots from indexing your website. Normally this is done with a server-wide robots.txt file, but users on a shared server do not have access to it. In that case, the alternatives are to restrict access by IP address or to put the pages behind a password.
However, the simplest way, without any access restrictions, is to add the following meta tag to the <head> element of every HTML file you want to keep out of search indexes:
<meta name="robots" content="noindex,nofollow">
For example:
<html>
<head>
<title>Page title</title>
<meta name="robots" content="noindex,nofollow">
</head>
<body>
Content here that will not be indexed
</body>
</html>
Note that this does not prevent indexing of an entire directory's contents, only of the HTML file itself and the pages it links to. For example, if the directory contains image files that are linked to from somewhere other than your own site, this method may have no effect on them.
For more information, see e.g. http://www.heikniemi.fi/kirj/web/robots.html