Technical Architecture: Short Version

This is basically how I see the problem; this is the really-short version. Please read and comment.

Public containers

We start with all "public containers." We define public with respect to any of the traditional permissions (read, write, execute, and administer). We assume that the owner of the container sets the permissions to his liking. By default, the owner is the only writer, executor, and administrator of the information in the container. The read permission defaults to that of the parent container ("containing container" if you will), or to world-readable if there is no parent container. Thus the onus is on the owner to set permissions to private.
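
Here is a rough sketch (in Python, with made-up names, just to illustrate) of that default rule: owner-only write/execute/administer, and read inherited from the parent container or world-readable at the top.

    WORLD = "world"

    class Container:
        # Hypothetical illustration of the default-permission rule above.
        def __init__(self, owner, parent=None):
            self.owner = owner
            self.parent = parent
            self.perms = {
                # By default the owner is the only writer, executor, and administrator.
                "write": {owner},
                "execute": {owner},
                "administer": {owner},
                # Read defaults to the parent ("containing") container's read
                # permission, or to world-readable if there is no parent.
                "read": set(parent.perms["read"]) if parent else {WORLD},
            }

        def set_private(self):
            # The onus is on the owner to override the default.
            self.perms["read"] = {self.owner}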

Moving private containers

Next we have "moving but private containers." I think we can safely say that in order for these to be protected, they must be encrypted somehow, and that labelling them as private is not enough. We can define a class of "safe" encryption techniques, drawn up by some consortium and approved by the government, and that class can be constantly inspected and modified as necessary. A moving container is (for example) an email or an entire network "dialog" (via any network protocol) which accesses otherwise protected or unprotected files and information; the latter case protects remote access to private files which do not themselves move. A self-contained container is one which does not exist within the context of another container (nesting is defined later). An email or network dialog which is transmitted without end-to-end encryption (i.e., encrypted by the sender for the receiver) is considered fair game for snoopers.
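
To make "end-to-end" concrete — the sender encrypts for the receiver before the container ever moves — here is a small sketch using PyNaCl's public-key box as a stand-in for whatever the approved class of "safe" techniques ends up being (the library choice is just my assumption for illustration):

    from nacl.public import PrivateKey, Box

    sender_key = PrivateKey.generate()
    receiver_key = PrivateKey.generate()

    # Sender side: encrypt with the sender's private key and the receiver's public key.
    sealed = Box(sender_key, receiver_key.public_key).encrypt(b"private mail body")

    # Receiver side: only the receiver's private key (plus the sender's public key) opens it.
    plaintext = Box(receiver_key, sender_key.public_key).decrypt(sealed)
    assert plaintext == b"private mail body"

    # Anything transmitted without this step is, by the rule above, fair game for snoopers.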

Stationary private containers

The hardest problem is dealing with "stationary and private containers" which are otherwise accessed safely. A user should not have to encrypt all of his files on disk in order to feel as though he has a right to privacy. For example, I don't feel as though my Athena files (which are in my ~/Private directory and have the appropriate file permissions) need to be encrypted in order for people to know they shouldn't read them. Here, unfortunately, we need to operate with the "expectation of privacy" of the owner of the container in mind. Also, encryption may not even be the best solution for write- and administer-type permissions, for technical reasons.
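
One way to make "expectation of privacy" operational without requiring encryption is to read it off the ordinary permission bits. A quick sketch (the function name and example path are my own assumptions):

    import os
    import stat

    def expects_privacy(path):
        # If neither group nor other can read the file, the owner has signalled
        # privacy -- much like a ~/Private directory with the right permissions.
        mode = os.stat(path).st_mode
        return not (mode & (stat.S_IRGRP | stat.S_IROTH))

    # Reading such a file without being the owner (or an explicitly granted
    # reader) would then count as trespass, even though it is unencrypted.
    # e.g. expects_privacy(os.path.expanduser("~/Private/notes.txt"))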

For traditional computers, we can consider that there are different points of entry: (1) physical points of entry into the computer's home location, (2) all the ports available from the computer, and (3) all other network-available services or protocols (UDP?) that may not occupy a given port.

We can look at these points of entry as, well, doors into a computer. A service such as fingerd (or even a physical security guard) sits as a guard on each of these doors, allowing appropriate requests in but stopping others (I hope). You can specifically turn off access through most of these ports, another good way to protect privacy from trespass. If you run fingerd you should "expect" people to finger your machine, but you shouldn't be held liable if someone exploits fingerd to get into your system through it; that seems like trespass. Finally, any daemon (such as a web server, httpd) can grant access according to the permissions defined above. A username/password screen should specify who has access to what. The default, though, is again basic privacy if there is no username/password for a particular site or set of pages (container). The rule: if you can click there, you can be there.
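
A sketch of the "doors" idea (the names are mine, not a real daemon API): a request is legitimate if the door is open, nothing was exploited, and any credential check was passed honestly.

    class Door:
        def __init__(self, service, requires_login=False):
            self.service = service              # e.g. "fingerd", "httpd", "sshd"
            self.requires_login = requires_login

    def may_enter(door, open_doors, credentials_ok=True, exploited_bug=False):
        if door.service not in open_doors:      # port turned off: no way in, no expectation of entry
            return False
        if exploited_bug:                       # getting in through a fingerd bug is trespass
            return False
        if door.requires_login and not credentials_ok:
            return False                        # faking a password is not "clicking there"
        return True                             # if you can click there, you can be there

    # e.g. may_enter(Door("httpd"), open_doors={"httpd", "fingerd"})  ->  True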

Renting of containers (contract-based)

Finally, we have ownership and contracts, which, I hope, will take care of spam. This is based on the idea that some containers can exist within other containers. For example, AOL's fileserver (the largest container) contains a user's personal account (the middle-sized container), which in turn contains a user's particular file or mailbox (the smallest container). By contract, a user's personal account can be specified to accept whatever information the parent container (here, AOL) decides to give it (i.e., to allow or not to allow spam). Thus, a user can choose an Internet Service Provider based on whether or not that ISP allows spammers. Your mailbox is private if you decide that it is, by choosing your ISP based on its policies.
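
A sketch of the nesting and the contract (again, hypothetical names): the parent container holds the contract terms, and the mailbox accepts or rejects bulk mail accordingly.

    class ISP:
        # The parent container; allows_spam is the contract term the user shops for.
        def __init__(self, name, allows_spam):
            self.name = name
            self.allows_spam = allows_spam

    class Mailbox:
        # The smallest container, nested inside the ISP's container by contract.
        def __init__(self, isp):
            self.isp = isp
            self.messages = []

        def deliver(self, message, is_spam=False):
            # The parent container decides what information the mailbox receives.
            if is_spam and not self.isp.allows_spam:
                return False                    # filtered per contract
            self.messages.append(message)
            return True

    # A user who wants a spam-free mailbox chooses the stricter ISP:
    # Mailbox(ISP("aol.example", allows_spam=False)).deliver("BUY NOW", is_spam=True)  ->  False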

Indeed, AOL should have a case for trespass-to-chattel-like violations if its own mailserver is violated via a barrage of email (causing denial of service). However, the individual user does not have a case against the spammer. The individual owner of a mailbox who receives spam has a case if and only if he has a contract with his ISP saying that he would not receive spam through that system. I don't really know exactly how an ISP would filter out all spam, and I'm sure the contracts will reflect that accordingly.

Essentially we're saying that, by nature, mailboxes are neither public nor private. They are specified by contract with the ISP, and in turn protected by the ISP as necessary.

I realize this isn't very clear; it's really late. I'll try to be more specific. There doesn't seem to be a lot of new technological stuff needed here; it seems more legal than technical.

-- Anonymous, November 29, 1998

Answers

There's a mistake above. In the section about httpd it says that the default is privacy. That's not really true if you read the rest of the paragraph. If you can click through to a page without having to fake a password or get through other security, you should be allowed to be there. The same is true of typing URLs directly, or even of telnet, ssh, rsh, or whatever other locations. If you don't have to fake a password or exploit bugs in a daemon, you can be there.

I know this needs to be fleshed out more.

-- Anonymous, November 29, 1998

