<feed xmlns='http://www.w3.org/2005/Atom'>
<title>glusterfs-afrv1.git/libglusterfs/src, branch v3.4.0beta1</title>
<subtitle></subtitle>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/'/>
<entry>
<title>Fix uninitialized mutex usage in synctask_destroy</title>
<updated>2013-05-04T07:32:22+00:00</updated>
<author>
<name>Emmanuel Dreyfus</name>
<email>manu@netbsd.org</email>
</author>
<published>2013-05-01T04:23:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=dfa76943df9c36c3c7f5b31cf153b3c4bbc2ac2e'/>
<id>dfa76943df9c36c3c7f5b31cf153b3c4bbc2ac2e</id>
<content type='text'>
synctask_new() initializes task-&gt;mutex only if task-&gt;synccbk is NULL.
synctask_done() calls synctask_destroy() if task-&gt;synccbk is not NULL.
synctask_destroy() always destroys the mutex, even when it was never
initialized.

Fix that by checking for task-&gt;synccbk in synctask_destroy().
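
The guard can be sketched as follows; struct synctask here is a simplified
stand-in for the real structure, with the mutex heap-allocated purely for
illustration:

```c
#include "stdlib.h"
#include "pthread.h"

/* simplified stand-in for the real synctask structure */
struct synctask {
        void (*synccbk)(void);   /* completion callback; may be NULL */
        pthread_mutex_t *mutex;  /* initialized only when synccbk is NULL */
};

/* mirrors synctask_new(): the mutex exists only for callback-less tasks */
static void synctask_init_sketch (struct synctask *task)
{
        if (task->synccbk == NULL) {
                task->mutex = malloc (sizeof (*task->mutex));
                pthread_mutex_init (task->mutex, NULL);
        }
}

/* the fix: destroy the mutex only when it was actually initialized */
static void synctask_destroy_sketch (struct synctask *task)
{
        if (task->synccbk == NULL) {
                pthread_mutex_destroy (task->mutex);
                free (task->mutex);
        }
        free (task);
}
```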

This is a backport of I50bb53bc6e2738dc0aa830adc4c1ea37b24ee2a0

BUG: 764655
Change-Id: I3d6292f05a986ae3ceee35161791348ce3771c12
Signed-off-by: Emmanuel Dreyfus &lt;manu@netbsd.org&gt;
Reviewed-on: http://review.gluster.org/4920
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
synctask_new() initializes task-&gt;mutex only if task-&gt;synccbk is NULL.
synctask_done() calls synctask_destroy() if task-&gt;synccbk is not NULL.
synctask_destroy() always destroys the mutex, even when it was never
initialized.

Fix that by checking for task-&gt;synccbk in synctask_destroy().

This is a backport of I50bb53bc6e2738dc0aa830adc4c1ea37b24ee2a0

BUG: 764655
Change-Id: I3d6292f05a986ae3ceee35161791348ce3771c12
Signed-off-by: Emmanuel Dreyfus &lt;manu@netbsd.org&gt;
Reviewed-on: http://review.gluster.org/4920
Reviewed-by: Kaleb KEITHLEY &lt;kkeithle@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>syncenv: be robust against spurious wake()s</title>
<updated>2013-04-17T12:46:09+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-04-15T10:25:28+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=1787debc1b6640e15a02ccac4699b92affb2bb14'/>
<id>1787debc1b6640e15a02ccac4699b92affb2bb14</id>
<content type='text'>
In the current implementation, when the callers of synctasks perform
a spurious wake() of a sleeping synctask (i.e., an extra wake() soon
after a wake() which already woke up a yielded synctask), there is
a possibility of two sync threads picking up the same synctask.
This can result in a crash. The fix is to update -&gt;slept (0|1) and
the synctask's membership in the runqueue atomically.

Today we dequeue a task from the runqueue in syncenv_task(), but
reset -&gt;slept = 0 much later in synctask_switchto() in an unlocked
manner -- which is safe, when there are no spurious wake()s.

However, this opens a race window where, if a second wake() happens
after the dequeue but before -&gt;slept is set to 0, the same synctask
is queued on the runqueue once again and gets picked up by a
different sync thread.

This has been diagnosed as the cause of the crashes in the regression
tests of http://review.gluster.org/4784. However, that patch still has a
spurious wake() [the trigger for this bug] which is yet to be fixed.
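
The atomic update of slept and runqueue membership can be sketched with a
single lock guarding both fields (hypothetical names, not the actual
syncenv code, which protects real list membership rather than a flag):

```c
#include "stdlib.h"
#include "pthread.h"

struct synctask_sketch {
        pthread_mutex_t *lock;
        int slept;        /* 1 while the task is yielded/sleeping */
        int on_runqueue;  /* stand-in for runqueue list membership */
};

/* wake(): only a sleeping task may be (re)queued */
static void wake_sketch (struct synctask_sketch *t)
{
        pthread_mutex_lock (t->lock);
        if (t->slept)
                t->on_runqueue = 1;
        pthread_mutex_unlock (t->lock);
}

/* syncenv_task(): the dequeue and the slept reset happen together, under
   one lock, so a second wake() after the dequeue finds slept == 0 and
   cannot queue the task a second time */
static void pick_task_sketch (struct synctask_sketch *t)
{
        pthread_mutex_lock (t->lock);
        t->on_runqueue = 0;
        t->slept = 0;
        pthread_mutex_unlock (t->lock);
}
```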

BUG: 948686
Change-Id: I51858e887cad2680e46fb973629f8465f4429363
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4833
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In the current implementation, when the callers of synctasks perform
a spurious wake() of a sleeping synctask (i.e., an extra wake() soon
after a wake() which already woke up a yielded synctask), there is
a possibility of two sync threads picking up the same synctask.
This can result in a crash. The fix is to update -&gt;slept (0|1) and
the synctask's membership in the runqueue atomically.

Today we dequeue a task from the runqueue in syncenv_task(), but
reset -&gt;slept = 0 much later in synctask_switchto() in an unlocked
manner -- which is safe, when there are no spurious wake()s.

However, this opens a race window where, if a second wake() happens
after the dequeue but before -&gt;slept is set to 0, the same synctask
is queued on the runqueue once again and gets picked up by a
different sync thread.

This has been diagnosed as the cause of the crashes in the regression
tests of http://review.gluster.org/4784. However, that patch still has a
spurious wake() [the trigger for this bug] which is yet to be fixed.

BUG: 948686
Change-Id: I51858e887cad2680e46fb973629f8465f4429363
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4833
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>synctask: introduce synclocks for co-operative locking</title>
<updated>2013-04-17T08:53:20+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-04-15T10:11:21+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=563b608126e812482a25464df7c70079fb0ba2c0'/>
<id>563b608126e812482a25464df7c70079fb0ba2c0</id>
<content type='text'>
This patch introduces synclocks - co-operative locks for synctasks.
Synctasks yield themselves when a lock cannot be acquired at the time
of the lock call, and the unlocker will wake the yielded locker at
the time of unlock.

The implementation is safe in a multi-threaded syncenv framework.

It is also safe to share the lock with non-synctasks, i.e., the
same lock can be used for synchronization between a synctask and
a regular thread. In such a situation, waiting synctasks will yield
themselves while non-synctasks will sleep on a cond variable. The
unlocker (which could be either a synctask or a regular thread) will
wake up any type of lock waiter (synctask or regular).
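
The mixed-waiter behaviour can be sketched as below (hypothetical names;
the real synclock keeps a queue of waiting synctasks and wakes them via
the syncenv, rather than spinning with a thread yield):

```c
#include "stdlib.h"
#include "sched.h"
#include "pthread.h"

/* simplified synclock: one guard mutex, one cond for plain threads */
struct synclock_sketch {
        pthread_mutex_t *guard;
        pthread_cond_t  *cond;
        int              held;
};

static void synclock_lock_sketch (struct synclock_sketch *l, int is_synctask)
{
        pthread_mutex_lock (l->guard);
        while (l->held) {
                if (is_synctask) {
                        /* real code: enqueue this task and synctask_yield();
                           the unlocker puts it back on the runqueue */
                        pthread_mutex_unlock (l->guard);
                        sched_yield ();
                        pthread_mutex_lock (l->guard);
                } else {
                        /* plain threads sleep on the cond variable */
                        pthread_cond_wait (l->cond, l->guard);
                }
        }
        l->held = 1;
        pthread_mutex_unlock (l->guard);
}

static void synclock_unlock_sketch (struct synclock_sketch *l)
{
        pthread_mutex_lock (l->guard);
        l->held = 0;
        pthread_cond_signal (l->cond); /* wakes a plain-thread waiter; a
                                          synctask waiter is re-queued */
        pthread_mutex_unlock (l->guard);
}
```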

Usage:

    Declaration and Initialization
    ------------------------------

    synclock_t lock;

    ret = synclock_init (&amp;lock);
    if (ret) {
        /* lock could not be allocated */
    }

   Locking and non-blocking lock attempt
   -------------------------------------

   ret = synclock_trylock (&amp;lock);
   if (ret &amp;&amp; (errno == EBUSY)) {
      /* lock is held by someone else */
      return;
   }

   synclock_lock (&amp;lock);
   {
      /* critical section */
   }
   synclock_unlock (&amp;lock);

BUG: 763820
Change-Id: I23066f7b66b41d3d9fb2311fdaca333e98dd7442
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4830
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch introduces synclocks - co-operative locks for synctasks.
Synctasks yield themselves when a lock cannot be acquired at the time
of the lock call, and the unlocker will wake the yielded locker at
the time of unlock.

The implementation is safe in a multi-threaded syncenv framework.

It is also safe to share the lock with non-synctasks, i.e., the
same lock can be used for synchronization between a synctask and
a regular thread. In such a situation, waiting synctasks will yield
themselves while non-synctasks will sleep on a cond variable. The
unlocker (which could be either a synctask or a regular thread) will
wake up any type of lock waiter (synctask or regular).

Usage:

    Declaration and Initialization
    ------------------------------

    synclock_t lock;

    ret = synclock_init (&amp;lock);
    if (ret) {
        /* lock could not be allocated */
    }

   Locking and non-blocking lock attempt
   -------------------------------------

   ret = synclock_trylock (&amp;lock);
   if (ret &amp;&amp; (errno == EBUSY)) {
      /* lock is held by someone else */
      return;
   }

   synclock_lock (&amp;lock);
   {
      /* critical section */
   }
   synclock_unlock (&amp;lock);

BUG: 763820
Change-Id: I23066f7b66b41d3d9fb2311fdaca333e98dd7442
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Original-author: Anand Avati &lt;avati@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4830
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rpc: before freeing the volume options object, delete it from the list</title>
<updated>2013-04-12T17:07:23+00:00</updated>
<author>
<name>Raghavendra Bhat</name>
<email>raghavendra@redhat.com</email>
</author>
<published>2013-03-18T13:44:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=30b4337d11d361cc8e0122bdb0d2ced09243813a'/>
<id>30b4337d11d361cc8e0122bdb0d2ced09243813a</id>
<content type='text'>
* Suppose there is an xlator option which is considered by the xlator
  only if the source was built with debug mode enabled (the only example
  in the current code base is the run-with-valgrind option for glusterd).
  Giving that option would make the process crash if the source was not
  built with debug mode enabled.

  Reason:
  In rpc, after getting the options symbol dynamically, it was stored in the
  newly allocated volume options structure and the structure's list head was
  added to the xlator's volume_options list. But the structure was freed
  without being deleted from the list. Thus, when the list was traversed,
  the already-freed structure was accessed, leading to a segfault.
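
The fix amounts to unlinking the entry before freeing it. A minimal
kernel-style list sketch (hypothetical names, not gluster's actual
list.h helpers):

```c
#include "stdlib.h"

/* minimal doubly linked list node, standing in for a list head */
struct list_node {
        struct list_node *prev;
        struct list_node *next;
};

/* unlink a node from its list */
static void list_del_sketch (struct list_node *n)
{
        n->prev->next = n->next;
        n->next->prev = n->prev;
        n->prev = NULL;
        n->next = NULL;
}

/* the fix: delete from the list, then free - never free while linked */
static void destroy_entry_sketch (struct list_node *n)
{
        list_del_sketch (n);
        free (n);
}
```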

Change-Id: I3e9e51dd2099e34b206199eae7ba44d9d88a86ad
BUG: 922877
Signed-off-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4687
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-on: http://review.gluster.org/4818
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
* Suppose there is an xlator option which is considered by the xlator
  only if the source was built with debug mode enabled (the only example
  in the current code base is the run-with-valgrind option for glusterd).
  Giving that option would make the process crash if the source was not
  built with debug mode enabled.

  Reason:
  In rpc, after getting the options symbol dynamically, it was stored in the
  newly allocated volume options structure and the structure's list head was
  added to the xlator's volume_options list. But the structure was freed
  without being deleted from the list. Thus, when the list was traversed,
  the already-freed structure was accessed, leading to a segfault.

Change-Id: I3e9e51dd2099e34b206199eae7ba44d9d88a86ad
BUG: 922877
Signed-off-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4687
Reviewed-by: Amar Tumballi &lt;amarts@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-on: http://review.gluster.org/4818
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>dict: Put "goto out" in dict_unserialize to avoid process crash</title>
<updated>2013-04-12T07:20:30+00:00</updated>
<author>
<name>Venkatesh Somyajulu</name>
<email>vsomyaju@redhat.com</email>
</author>
<published>2013-04-03T12:01:57+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=1a40438f210ecab26a8fa1e3ca8fce01cf0623dc'/>
<id>1a40438f210ecab26a8fa1e3ca8fce01cf0623dc</id>
<content type='text'>
Problem:
In the dictionary unserialization function (dict_unserialize), if
(buf + vallen) &gt; (orig_buf + size), then memdup is called with a
length that runs past the end of the input buffer, which can crash
the process.

Fix:
Put "goto out" whenever this condition is met.
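
The pattern can be sketched as a bounds-checked value copy (hypothetical
helper, not the actual dict_unserialize code):

```c
#include "stdlib.h"
#include "string.h"

/* copy vallen bytes from buf, but bail out instead of reading past
   the end of the input buffer [orig_buf, orig_buf + size) */
static char *copy_value_sketch (char *buf, size_t vallen,
                                char *orig_buf, size_t size)
{
        char *val = NULL;

        if (buf + vallen > orig_buf + size)
                goto out;        /* the added "goto out" */

        val = malloc (vallen + 1);
        if (val == NULL)
                goto out;
        memcpy (val, buf, vallen);
        val[vallen] = '\0';
out:
        return val;
}
```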

Change-Id: I8c07dd5187364ccd6ad7625e2e3907d8b56447a9
BUG: 947824
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4771
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Problem:
In the dictionary unserialization function (dict_unserialize), if
(buf + vallen) &gt; (orig_buf + size), then memdup is called with a
length that runs past the end of the input buffer, which can crash
the process.

Fix:
Put "goto out" whenever this condition is met.

Change-Id: I8c07dd5187364ccd6ad7625e2e3907d8b56447a9
BUG: 947824
Signed-off-by: Venkatesh Somyajulu &lt;vsomyaju@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4771
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs/dict: fix infinite loop in dict_keys_join()</title>
<updated>2013-03-27T18:06:55+00:00</updated>
<author>
<name>Vijaykumar Koppad</name>
<email>vkoppad@redhat.com</email>
</author>
<published>2013-03-27T08:44:41+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=158e51c17c0f569a1c107fa8747cf8d3fdb76b6d'/>
<id>158e51c17c0f569a1c107fa8747cf8d3fdb76b6d</id>
<content type='text'>
         - missing "pairs = next" caused an infinite loop
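
The shape of the bug can be sketched as follows (hypothetical names;
dict_keys_join itself walks dict pairs and joins key strings):

```c
#include "stdlib.h"

struct pair_sketch {
        struct pair_sketch *next;
        char *key;
};

/* walk the pair list; without the advance at the bottom, the loop
   re-examines the same pair forever */
static int count_keys_sketch (struct pair_sketch *pairs)
{
        int count = 0;
        while (pairs != NULL) {
                struct pair_sketch *next = pairs->next;
                count++;
                pairs = next;    /* the missing "pairs = next" */
        }
        return count;
}
```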

Change-Id: I3edc4f50473f7498815c73e1066167392718fddf
BUG: 905871
Signed-off-by: Vijaykumar Koppad &lt;vkoppad@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4728
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
         - missing "pairs = next" caused an infinite loop

Change-Id: I3edc4f50473f7498815c73e1066167392718fddf
BUG: 905871
Signed-off-by: Vijaykumar Koppad &lt;vkoppad@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4728
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Anand Avati &lt;avati@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>glusterd: Increasing throughput of synctask based mgmt ops.</title>
<updated>2013-03-07T06:06:30+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-02-20T09:14:23+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=8224bc6111b3bf5a710b6e5315b39b85904f3fe1'/>
<id>8224bc6111b3bf5a710b6e5315b39b85904f3fe1</id>
<content type='text'>
Change-Id: Ibd963f78707b157fc4c9729aa87206cfd5ecfe81
BUG: 913662
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4638
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: Ibd963f78707b157fc4c9729aa87206cfd5ecfe81
BUG: 913662
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4638
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>synctask: support for (asymmetric) counted barriers</title>
<updated>2013-03-07T05:32:34+00:00</updated>
<author>
<name>Krishnan Parthasarathi</name>
<email>kparthas@redhat.com</email>
</author>
<published>2013-03-06T06:15:48+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=6f2dc529faba92f10a5fee618bed05ebf752ef9e'/>
<id>6f2dc529faba92f10a5fee618bed05ebf752ef9e</id>
<content type='text'>
[Backport of Avati's patch on master - http://review.gluster.org/4558]
This patch introduces a new set of primitives:

  - synctask_barrier_init (stub)
  - synctask_barrier_waitfor (stub, count)
  - synctask_barrier_wake (stub)

Unlike pthread_barrier_t, this barrier has an explicit notion of
"waiter" and "waker". The "waiter" waits for @count number of
"wakers" to call synctask_barrier_wake() before returning. The
wait performed by the waiter via synctask_barrier_waitfor() is
co-operative in nature and yields the thread for scheduling other
synctasks in the mean time.

Intended use case:

  Eliminate excessive serialization in glusterd and allow for
concurrent RPC transactions.

  Code which is currently in this format:

---old---

  list_for_each_entry (peerinfo, peers, op_peers_list) {
          ...
          GD_SYNCOP (peerinfo-&gt;rpc, stub, rpc_cbk, ...);
  }

  ...

  int rpc_cbk (rpc, stub, ...)
  {
          ...
          __wake (stub);
  }

---old---

  Can be restructured into the format:

---new---

  synctask_barrier_init (stub);
  {
          list_for_each_entry (peerinfo, peers, op_peers_list) {
                  ...
                  rpc_submit (peerinfo-&gt;rpc, stub, rpc_cbk, ...);
                  count++;
           }
   }
   synctask_barrier_wait (stub, count);

   ...

   int rpc_cbk (rpc, stub, ...)
   {
           ...
           synctask_barrier_wake (stub);
   }

---new---

  In the above structure, from the synctask's point of view, the region
between synctask_barrier_init() and synctask_barrier_wait() spawns
off asynchronous "threads" (or RPCs) and keeps count of how many such
threads have been spawned. Each of those threads is expected to make
one call to synctask_barrier_wake(). The call to synctask_barrier_wait()
makes the synctask thread co-operatively wait/sleep till @count such
threads call their wake function.

  This way, the synctask thread retains the "synchronous" flow in the code,
yet at the same time allows for asynchronous "threads" to achieve parallelism
over RPC.
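
For plain pthreads, the waiter/waker semantics can be approximated with a
counted condition variable (a hypothetical sketch; the synctask version
yields co-operatively instead of blocking the thread):

```c
#include "stdlib.h"
#include "pthread.h"

/* asymmetric counted barrier: one waiter, many wakers */
struct barrier_sketch {
        pthread_mutex_t *guard;
        pthread_cond_t  *cond;
        int              waitfor;  /* wakes still outstanding */
};

/* each "waker" calls this exactly once */
static void barrier_wake_sketch (struct barrier_sketch *b)
{
        pthread_mutex_lock (b->guard);
        b->waitfor--;
        if (b->waitfor == 0)
                pthread_cond_signal (b->cond);
        pthread_mutex_unlock (b->guard);
}

/* the "waiter" sleeps until count wakes have arrived; wakes that
   happened before the wait are accounted for by the running counter */
static void barrier_waitfor_sketch (struct barrier_sketch *b, int count)
{
        pthread_mutex_lock (b->guard);
        b->waitfor += count;
        while (b->waitfor != 0)
                pthread_cond_wait (b->cond, b->guard);
        pthread_mutex_unlock (b->guard);
}
```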

Change-Id: Ie037f99b2d306b71e63e3a56353daec06fb0bf41
BUG: 913662
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4636
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
[Backport of Avati's patch on master - http://review.gluster.org/4558]
This patch introduces a new set of primitives:

  - synctask_barrier_init (stub)
  - synctask_barrier_waitfor (stub, count)
  - synctask_barrier_wake (stub)

Unlike pthread_barrier_t, this barrier has an explicit notion of
"waiter" and "waker". The "waiter" waits for @count number of
"wakers" to call synctask_barrier_wake() before returning. The
wait performed by the waiter via synctask_barrier_waitfor() is
co-operative in nature and yields the thread for scheduling other
synctasks in the mean time.

Intended use case:

  Eliminate excessive serialization in glusterd and allow for
concurrent RPC transactions.

  Code which is currently in this format:

---old---

  list_for_each_entry (peerinfo, peers, op_peers_list) {
          ...
          GD_SYNCOP (peerinfo-&gt;rpc, stub, rpc_cbk, ...);
  }

  ...

  int rpc_cbk (rpc, stub, ...)
  {
          ...
          __wake (stub);
  }

---old---

  Can be restructured into the format:

---new---

  synctask_barrier_init (stub);
  {
          list_for_each_entry (peerinfo, peers, op_peers_list) {
                  ...
                  rpc_submit (peerinfo-&gt;rpc, stub, rpc_cbk, ...);
                  count++;
           }
   }
   synctask_barrier_wait (stub, count);

   ...

   int rpc_cbk (rpc, stub, ...)
   {
           ...
           synctask_barrier_wake (stub);
   }

---new---

  In the above structure, from the synctask's point of view, the region
between synctask_barrier_init() and synctask_barrier_wait() spawns
off asynchronous "threads" (or RPCs) and keeps count of how many such
threads have been spawned. Each of those threads is expected to make
one call to synctask_barrier_wake(). The call to synctask_barrier_wait()
makes the synctask thread co-operatively wait/sleep till @count such
threads call their wake function.

  This way, the synctask thread retains the "synchronous" flow in the code,
yet at the same time allows for asynchronous "threads" to achieve parallelism
over RPC.

Change-Id: Ie037f99b2d306b71e63e3a56353daec06fb0bf41
BUG: 913662
Signed-off-by: Anand Avati &lt;avati@redhat.com&gt;
Signed-off-by: Krishnan Parthasarathi &lt;kparthas@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4636
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>synctask: implement setuid-like SYNCTASK_SETID()</title>
<updated>2013-03-04T10:42:19+00:00</updated>
<author>
<name>shishir gowda</name>
<email>sgowda@redhat.com</email>
</author>
<published>2013-02-15T06:22:41+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=184adfd07e437d9531ebea88208d14c11f51137e'/>
<id>184adfd07e437d9531ebea88208d14c11f51137e</id>
<content type='text'>
synctasks can now call SYNCTASK_SETID(uid,gid) to set the effective
uid/gid of the frame with which the FOP will be performed.

Once called, the uid/gid remains in effect either until the end of the
synctask or until the next call of SYNCTASK_SETID().
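
The sticky-identity behaviour can be sketched as below (hypothetical
names; the real macro stores the ids on the synctask, and each FOP frame
created afterwards inherits them):

```c
/* hypothetical stand-in for a synctask's effective credentials */
struct synctask_creds {
        int uid;
        int gid;
};

/* SYNCTASK_SETID(uid, gid): the ids stick until the next call */
static void synctask_setid_sketch (struct synctask_creds *task,
                                   int uid, int gid)
{
        task->uid = uid;
        task->gid = gid;
}

/* each frame created for a FOP inherits the task's current ids */
static void frame_from_task_sketch (struct synctask_creds *task,
                                    struct synctask_creds *frame)
{
        frame->uid = task->uid;
        frame->gid = task->gid;
}
```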

Back-porting Avati's patch http://review.gluster.org/#change,4269

BUG: 884597
Change-Id: Id0569da4bb8959636881457217fe004bf30c5b9d
Signed-off-by: shishir gowda &lt;sgowda@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4611
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
synctasks can now call SYNCTASK_SETID(uid,gid) to set the effective
uid/gid of the frame with which the FOP will be performed.

Once called, the uid/gid remains in effect either until the end of the
synctask or until the next call of SYNCTASK_SETID().

Back-porting Avati's patch http://review.gluster.org/#change,4269

BUG: 884597
Change-Id: Id0569da4bb8959636881457217fe004bf30c5b9d
Signed-off-by: shishir gowda &lt;sgowda@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4611
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>libglusterfs: Fix memory leaks in fd_lk_insert_and_merge</title>
<updated>2013-03-03T14:19:26+00:00</updated>
<author>
<name>Vijay Bellur</name>
<email>vbellur@redhat.com</email>
</author>
<published>2013-02-16T14:40:24+00:00</published>
<link rel='alternate' type='text/html' href='http://dev.gluster.org/cgit/glusterfs-afrv1.git/commit/?id=e776deed24645cc52b0fab46d566c91b4163adc1'/>
<id>e776deed24645cc52b0fab46d566c91b4163adc1</id>
<content type='text'>
Change-Id: I666664895fdd7c7199797796819e652557a7ac99
BUG: 834465
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4529
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Change-Id: I666664895fdd7c7199797796819e652557a7ac99
BUG: 834465
Signed-off-by: Vijay Bellur &lt;vbellur@redhat.com&gt;
Reviewed-on: http://review.gluster.org/4529
Reviewed-by: Raghavendra Bhat &lt;raghavendra@redhat.com&gt;
Tested-by: Gluster Build System &lt;jenkins@build.gluster.com&gt;
Reviewed-by: Jeff Darcy &lt;jdarcy@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
