-rw-r--r--  README.md                           8
-rw-r--r--  doc/markdown/auth_guide.md         12
-rw-r--r--  doc/markdown/concepts.md            2
-rw-r--r--  doc/markdown/object-expiration.md   4
-rw-r--r--  doc/markdown/quick_start_guide.md  13
-rw-r--r--  doc/markdown/s3.md                  1
-rw-r--r--  doc/markdown/user_guide.md         66
-rw-r--r--  glusterfs-openstack-swift.spec      4
8 files changed, 28 insertions(+), 82 deletions(-)
diff --git a/README.md b/README.md
index 42ab315..8b7602d 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,7 @@
# Gluster-Swift
-Gluster-Swift enables files and directories created on GlusterFS
-to be accessed as objects via the Swift API.
+Gluster-Swift provides an object interface to GlusterFS volumes. It allows
+files and directories created on a GlusterFS volume to be accessed as objects
+via the OpenStack Swift and S3 APIs.
+
+Please refer to [quick start guide](doc/markdown/quick_start_guide.md)
+to get started.
diff --git a/doc/markdown/auth_guide.md b/doc/markdown/auth_guide.md
index 86c3650..f843c13 100644
--- a/doc/markdown/auth_guide.md
+++ b/doc/markdown/auth_guide.md
@@ -51,7 +51,7 @@ otherwise you can install it via pip:
sudo pip install python-keystoneclient
-### <a name="keystone_swift_accounts />Creation of swift accounts ###
+### <a name="keystone_swift_accounts" />Creation of swift accounts ###
Due to current limitations of gluster-swift, you *must* create one
volume for each Keystone tenant (project), and its name *must* match
@@ -205,19 +205,19 @@ See <http://gholt.github.com/swauth/> for more information on Swauth.
1. GSwauth is installed by default with Gluster-Swift.
-1. Create and start the `gsmetadata` gluster volume
+2. Create and start the `gsmetadata` gluster volume
~~~
gluster volume create gsmetadata <hostname>:<brick>
gluster volume start gsmetadata
~~~
-1. run `gluster-swift-gen-builders` with all volumes that should be
+3. Run `gluster-swift-gen-builders` with all volumes that should be
accessible by gluster-swift, including `gsmetadata`
~~~
gluster-swift-gen-builders gsmetadata <other volumes>
~~~
-1. Change your proxy-server.conf pipeline to have gswauth instead of tempauth:
+4. Change your proxy-server.conf pipeline to have gswauth instead of tempauth:
Was:
~~~
@@ -230,7 +230,7 @@ pipeline = catch_errors cache tempauth proxy-server
pipeline = catch_errors cache gswauth proxy-server
~~~
-1. Add to your proxy-server.conf the section for the GSwauth WSGI filter:
+5. Add to your proxy-server.conf the section for the GSwauth WSGI filter:
~~~
[filter:gswauth]
use = egg:gluster_swift#gswauth
@@ -243,7 +243,7 @@ token_life = 86400
max_token_life = 86400
~~~
-1. Restart your proxy server ``swift-init proxy reload``
+6. Restart your proxy server ``swift-init proxy reload``
##### Advanced options for GSwauth WSGI filter:
diff --git a/doc/markdown/concepts.md b/doc/markdown/concepts.md
deleted file mode 100644
index 2ad3f25..0000000
--- a/doc/markdown/concepts.md
+++ /dev/null
@@ -1,2 +0,0 @@
-# Overview and Concepts
-TBD
diff --git a/doc/markdown/object-expiration.md b/doc/markdown/object-expiration.md
index a61818a..e1798bc 100644
--- a/doc/markdown/object-expiration.md
+++ b/doc/markdown/object-expiration.md
@@ -7,18 +7,21 @@
* [Running object-expirer daemon](#running-daemon)
<a name="overview" />
+
## Overview
The Object Expiration feature offers **scheduled deletion of objects**. The client would use the *X-Delete-At* or *X-Delete-After* headers during an object PUT or POST and the cluster would automatically quit serving that object at the specified time and would shortly thereafter remove the object from the GlusterFS volume.
Expired objects, however, do appear in container listings until they are deleted by the object-expirer daemon. This behaviour is expected: https://bugs.launchpad.net/swift/+bug/1069849
<a name="setup" />
+
## Setup
Object expirer uses a separate account (a GlusterFS volume, for now, until support for multiple accounts per volume is implemented) named *gsexpiring*. You will have to [create a GlusterFS volume](quick_start_guide.md#gluster-volume-setup) by that name.
Object-expirer uses the */etc/swift/object-expirer.conf* configuration file. Make sure that it exists. If not, you can copy it from the */etc* directory of the gluster-swift source repo.
<a name="using" />
+
## Using object expiration
**PUT an object with X-Delete-At header using curl**
@@ -55,6 +58,7 @@ swift --os-auth-token=AUTH_tk99a39aecc3dd4f80b2b1e801d00df846 --os-storage-url=h
where the *X-Delete-After* header takes an integer number of seconds, after which the object expires. The proxy server that receives the request will convert this header into an X-Delete-At header using its current time plus the value given.
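The header conversion described above can be sketched in a few lines of Python (the function name is illustrative; the real conversion happens inside Swift's proxy server):

```python
import time

def delete_after_to_delete_at(delete_after, now=None):
    """Convert an X-Delete-After offset (in seconds) into an absolute
    X-Delete-At epoch timestamp, mirroring what the proxy server does."""
    if now is None:
        now = time.time()
    return int(now) + int(delete_after)

# An object PUT with "X-Delete-After: 3600" received at epoch 1500000000
# ends up stored with "X-Delete-At: 1500003600".
```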
<a name="running-daemon" />
+
## Running object-expirer daemon
The object-expirer daemon runs a pass once every X seconds (configurable using *interval* option in config file). For every pass it makes, it queries the *gsexpiring* account for "tracker objects". Based on (timestamp, path) present in name of "tracker objects", object-expirer then deletes the actual object and the corresponding tracker object.
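For reference, a minimal *object-expirer.conf* sketch with the *interval* option mentioned above (the value shown is illustrative, not a default taken from this repository):

```ini
[DEFAULT]

[object-expirer]
# Seconds between passes over the gsexpiring account
interval = 300
```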
diff --git a/doc/markdown/quick_start_guide.md b/doc/markdown/quick_start_guide.md
index 9312ae9..40b5439 100644
--- a/doc/markdown/quick_start_guide.md
+++ b/doc/markdown/quick_start_guide.md
@@ -8,6 +8,7 @@
* [What now?](#what_now)
<a name="overview" />
+
## Overview
The gluster-swift project enables object-based access (over the Swift and S3
APIs) to GlusterFS volumes. This guide is a great way to begin using gluster-swift,
@@ -24,6 +25,7 @@ the installation packages may vary.
> NOTE: In Gluster-Swift, accounts must be GlusterFS volumes.
<a name="gluster_setup" />
+
## Setting up GlusterFS
### Installing and starting GlusterFS
@@ -88,16 +90,17 @@ Mount the GlusterFS volume:
```
<a name="swift_setup" />
+
## Setting up gluster-swift
-### Installing Openstack Swift (kilo version)
+### Installing OpenStack Swift (Newton version)
If on Ubuntu 16.04:
```sh
# apt install python-pip libffi-dev memcached
# git clone https://github.com/openstack/swift; cd swift
-# git checkout -b kilo tags/kilo-eol
+# git checkout -b release-2.10.1 tags/2.10.1
# pip install -r ./requirements.txt
# python setup.py install
```
@@ -105,11 +108,11 @@ If on Ubuntu 16.04:
If on CentOS 7:
```sh
-# yum install centos-release-openstack-kilo
+# yum install centos-release-openstack-newton
# yum install openstack-swift-*
```
-### Installing gluster-swift (kilo version)
+### Installing gluster-swift (Newton version)
If on Ubuntu 16.04:
@@ -173,6 +176,7 @@ Use the following commands to start gluster-swift:
```
<a name="using_swift" />
+
## Using gluster-swift
### Create a container
@@ -215,6 +219,7 @@ following commands:
```
<a name="what_now" />
+
## What now?
For more information, please visit the following links:
diff --git a/doc/markdown/s3.md b/doc/markdown/s3.md
index 086718e..53ebdc9 100644
--- a/doc/markdown/s3.md
+++ b/doc/markdown/s3.md
@@ -182,6 +182,7 @@ This is not required when `auth_type` is set to `Plaintext`
```
<a name="no_auth" />
+
### No auth middleware in pipeline
In local, insecure deployments (for example, within an organization or department) where no authentication middleware is in the proxy pipeline, simply use the GlusterFS volume name as the `id` or `AWSAccessKeyId`.
diff --git a/doc/markdown/user_guide.md b/doc/markdown/user_guide.md
deleted file mode 100644
index 6108832..0000000
--- a/doc/markdown/user_guide.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# User Guide
-
-## Installation
-
-### GlusterFS Installation
-First, we need to install GlusterFS on the system by following the
-instructions on [GlusterFS QuickStart Guide][].
-
-### Fedora/RHEL/CentOS
-Gluster for Swift depends on OpenStack Swift Grizzly, which can be
-obtained by using [RedHat's RDO][] packages as follows:
-
-~~~
-yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
-~~~
-
-### Download
-Gluster for Swift uses [Jenkins][] for continuous integration and
-creation of distribution builds. Download the latest RPM builds
-from one of the links below:
-
-* RHEL/CentOS 6: [Download](http://build.gluster.org/job/gluster-swift-builds-cent6/lastSuccessfulBuild/artifact/build/)
-* Fedora 18+: [Download](http://build.gluster.org/job/gluster-swift-builds-f18/lastSuccessfulBuild/artifact/build/)
-
-Install the downloaded RPM using the following command:
-
-~~~
-yum install -y RPMFILE
-~~~
-
-where *RPMFILE* is the RPM file downloaded from Jenkins.
-
-## Configuration
-TBD
-
-## Server Control
-Command to start the servers (TBD)
-
-~~~
-swift-init main start
-~~~
-
-Command to stop the servers (TBD)
-
-~~~
-swift-init main stop
-~~~
-
-Command to gracefully reload the servers
-
-~~~
-swift-init main reload
-~~~
-
-### Mounting your volumes
-TBD
-
-Once this is done, you can access GlusterFS volumes via the Swift API where
-accounts are mounted volumes, containers are top-level directories,
-and objects are files and sub-directories of container directories.
-
-
-
-[GlusterFS QuickStart Guide]: http://www.gluster.org/community/documentation/index.php/QuickStart
-[RedHat's RDO]: http://openstack.redhat.com/Quickstart
-[Jenkins]: http://jenkins-ci.org
diff --git a/glusterfs-openstack-swift.spec b/glusterfs-openstack-swift.spec
index 8547eea..7a87141 100644
--- a/glusterfs-openstack-swift.spec
+++ b/glusterfs-openstack-swift.spec
@@ -2,7 +2,7 @@
# The following values are provided by passing the following arguments
# to rpmbuild. For example:
-# --define "_version 1.0" --define "_release 1" --define "_name g4s"
+# --define "_version 1.0" --define "_release 1" --define "_name g4s"
#
%{!?_version:%define _version __PKG_VERSION__}
%{!?_name:%define _name __PKG_NAME__}
@@ -103,7 +103,7 @@ done
%changelog
* Wed May 10 2017 Venkata R Edara <redara@redhat.com> - 2.10.1
-- Rebase to Swift 2.10.1 (newton)
+- Rebase to Swift 2.10.1 (newton)
* Tue Mar 15 2016 Prashanth Pai <ppai@redhat.com> - 2.3.0-0
- Rebase to swift kilo (2.3.0)