C# object pool using Interlocked.Increment

Problem description · Votes: 2 · Answers: 6

I've seen plenty of good object pool implementations. For example: C# Object Pooling Pattern implementation

But it seems the thread-safe ones always use locks and never try to use Interlocked.* operations.

It seems easy enough to write one that doesn't allow objects to be returned to the pool (just a big array with an index advanced by Interlocked.Increment). But I can't think of any way to write one that lets you return objects. Has anyone done this?
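
To make that concrete, here is roughly the kind of non-returning pool I mean; the size, the factory and the names are just placeholders:

using System;
using System.Threading;

// A one-shot pool: instances are handed out by advancing an index with
// Interlocked.Increment and are never returned.
sealed class OneShotPool<T> where T : class
{
    private readonly T[] _items;
    private int _next = -1;   // index of the last item handed out

    public OneShotPool(int size, Func<T> factory)
    {
        _items = new T[size];
        for (int i = 0; i < size; i++)
            _items[i] = factory();
    }

    public T Get()
    {
        int index = Interlocked.Increment(ref _next);
        if (index >= _items.Length)
            throw new InvalidOperationException("Pool exhausted.");
        return _items[index];
    }
}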

c# .net multithreading locking interlocked
6 Answers
2 votes

Give some thought to why you need an object pool at all; the question doesn't say what kind of object is being pooled. For most objects, the managed heap already provides everything you need without the hassle of writing a new pool manager in your own code. Pooling in managed code is only worth considering when your objects wrap a resource that is expensive to acquire or release.

If you really do need to do it yourself, a lightweight reader/writer lock can be useful for optimizing access to the pool.

http://theburningmonk.com/2010/02/threading-using-readerwriterlockslim/
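
To sketch what that might look like (the Stack<T> backing store and the Func<T> factory are my own assumptions, not from the linked post), read-only queries such as Count can share the read lock, while Get and Return mutate the pool and take the write lock:

using System;
using System.Collections.Generic;
using System.Threading;

sealed class RwLockPool<T> where T : class
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly Stack<T> _items = new Stack<T>();
    private readonly Func<T> _factory;

    public RwLockPool(Func<T> factory)
    {
        _factory = factory;
    }

    // Get and Return both mutate the pool, so they need the write lock.
    public T Get()
    {
        _lock.EnterWriteLock();
        try { return _items.Count > 0 ? _items.Pop() : _factory(); }
        finally { _lock.ExitWriteLock(); }
    }

    public void Return(T item)
    {
        _lock.EnterWriteLock();
        try { _items.Push(item); }
        finally { _lock.ExitWriteLock(); }
    }

    // Read-only queries can share the read lock.
    public int Count
    {
        get
        {
            _lock.EnterReadLock();
            try { return _items.Count; }
            finally { _lock.ExitReadLock(); }
        }
    }
}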


2 votes

I've built one using a lock-free queue based on a singly linked list. Below it is with some unrelated bits removed; I haven't tested it with those bits removed, but it should at least give you the idea.

using System.Threading;

internal sealed class LockFreeQueue<T>
{
  private sealed class Node
  {
    public readonly T Item;
    public Node Next;
    public Node(T item)
    {
      Item = item;
    }
  }
  private volatile Node _head;
  private volatile Node _tail;
  public LockFreeQueue()
  {
    _head = _tail = new Node(default(T));
  }
#pragma warning disable 420 // volatile semantics not lost as only by-ref calls are interlocked
  public void Enqueue(T item)
  {
    Node newNode = new Node(item);
    for(;;)
    {
      Node curTail = _tail;
      if (Interlocked.CompareExchange(ref curTail.Next, newNode, null) == null)   //append to the tail if it is indeed the tail.
      {
        Interlocked.CompareExchange(ref _tail, newNode, curTail);   //CAS in case we were assisted by an obstructed thread.
        return;
      }
      else
      {
        Interlocked.CompareExchange(ref _tail, curTail.Next, curTail);  //assist obstructing thread.
      }
    }
  }    
  public bool TryDequeue(out T item)
  {
    for(;;)
    {
      Node curHead = _head;
      Node curTail = _tail;
      Node curHeadNext = curHead.Next;
      if (curHead == curTail)
      {
        if (curHeadNext == null)
        {
          item = default(T);
          return false;
        }
        else
          Interlocked.CompareExchange(ref _tail, curHeadNext, curTail);   // assist obstructing thread
      }
      else
      {
        item = curHeadNext.Item;
        if (Interlocked.CompareExchange(ref _head, curHeadNext, curHead) == curHead)
        {
          return true;
        }
      }
    }
  }
#pragma warning restore 420
}
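
A pool wrapper over this queue could look roughly like the following; the Func<T> factory and the names are my additions rather than part of the code above:

using System;

internal sealed class LockFreePool<T> where T : class
{
    private readonly LockFreeQueue<T> _queue = new LockFreeQueue<T>();
    private readonly Func<T> _factory;

    public LockFreePool(Func<T> factory)
    {
        _factory = factory;
    }

    // Reuse a pooled instance if one is available, otherwise create a new one.
    public T Get()
    {
        T item;
        return _queue.TryDequeue(out item) ? item : _factory();
    }

    // Hand the instance back for later reuse.
    public void Return(T item)
    {
        _queue.Enqueue(item);
    }
}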

If your reason for pooling is the raw performance cost of allocation and collection, then the fact that this queue itself allocates nodes that must be collected makes it useless for that. If it's because the underlying resource is expensive to acquire and/or release, or because instances cache "learned" information, then it may be a good fit.


1 vote

The problem with handing back a reference to the object is that it defeats the whole attempt at locking access to it in the first place. You can't use a basic lock() statement to control access to a resource outside the object's own scope, which means the traditional getter/setter design doesn't work.

Something that could work is an object that holds the lockable resource and accepts lambdas or delegates that use it. The object locks the resource, runs the delegate, then unlocks when the delegate completes. That essentially hands control of running the code over to the locking object, while still allowing more complex operations than Interlocked supports.
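
For instance, a minimal sketch of that idea might look like this (the class and member names are made up for illustration):

using System;

public class GuardedResource<TResource>
{
    private readonly object _gate = new object();
    private readonly TResource _resource;

    public GuardedResource(TResource resource)
    {
        _resource = resource;
    }

    // The resource is never exposed directly; callers can only run code
    // against it while the lock is held.
    public void Use(Action<TResource> action)
    {
        lock (_gate)
        {
            action(_resource);
        }
    }

    public TResult Use<TResult>(Func<TResource, TResult> func)
    {
        lock (_gate)
        {
            return func(_resource);
        }
    }
}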

Another possible approach is to expose getters and setters but implement your own access control with a "checkout" model: when a thread is allowed to "get" the value, a reference to the current thread is kept in a lock-protected internal field. Until that thread calls the setter, aborts, and so on, all other threads attempting to use the getter are held in a yield loop. Once the resource is checked back in, the next thread can get it.

using System;
using System.Threading;

public class Library
{
   private Book controlledBook;
   private Thread checkoutThread;

   public Book CheckOutTheBook()
   {
      // Wait until no other live thread has the book checked out.
      while(checkoutThread != null && checkoutThread != Thread.CurrentThread && checkoutThread.IsAlive)
          Thread.Yield();

      lock(this)
      {
         checkoutThread = Thread.CurrentThread;

         return controlledBook;
      }
   }

   public void CheckInTheBook(Book theBook)
   {
      if(Thread.CurrentThread != checkoutThread)
          throw new InvalidOperationException("This thread does not have the resource checked out.");

      lock(this)
      {
         checkoutThread = null;

         controlledBook = theBook;
      }
   }

}

Now, note that this still requires some cooperation between the object's users. In particular, the setter logic is fairly naive: you cannot check the book back in without having checked it out. That rule may not be obvious to consumers, and incorrect use could lead to unhandled exceptions. Also, all users must know to check the object back in if they will stop using it before they terminate, even though basic C# knowledge would suggest that if you get a reference type, any changes you make are reflected everywhere. Still, this can serve as basic "one at a time" access control over a resource that isn't thread-safe.


1 vote

Have you looked at the Concurrent collections in .NET 4?

For example: http://msdn.microsoft.com/en-us/library/dd287191.aspx
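
For instance, a simple pool can be put together on top of ConcurrentBag<T>; this sketch and its names are mine, not from the linked documentation:

using System;
using System.Collections.Concurrent;

public sealed class ConcurrentBagPool<T> where T : class
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public ConcurrentBagPool(Func<T> factory)
    {
        _factory = factory;
    }

    // Take a pooled instance if one is available, otherwise create a new one.
    public T Get()
    {
        T item;
        return _items.TryTake(out item) ? item : _factory();
    }

    // Put the instance back for later reuse.
    public void Return(T item)
    {
        _items.Add(item);
    }
}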


0 votes

I can't see any real benefit in using Interlocked here either; it would have to be used in an unsafe manner. A lock only flips a flag in the object's memory space, which really is very fast. Interlocked is a bit better since it can be done in registers rather than in memory.

Are you actually running into performance problems? What is the main purpose of this code? At the end of the day, C# is designed to abstract memory management away from you so that you can focus on your business problem.

Bear in mind that if you need to manage memory yourself and work with unsafe pointers, you have to pin the memory region, which is an extra performance cost.
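
For illustration, pinning a buffer with a GCHandle looks like this; the pin is the extra cost referred to above:

using System;
using System.Runtime.InteropServices;

class PinningExample
{
    static void Main()
    {
        byte[] buffer = new byte[1024];

        // Pinning stops the GC from moving the array while raw pointers or
        // native code reference it, and that is what costs extra.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            // ... hand 'address' to unsafe or native code here ...
        }
        finally
        {
            handle.Free();
        }
    }
}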


0 votes

Good question. It is crucial when writing high-performance software with zero-allocation patterns built on fast object pools.

Microsoft has published an object pool under the Apache License 2.0.

It avoids locks and uses only Interlocked.CompareExchange for allocation (Get). It seems particularly fast when you get and free a small number of objects at a time, which covers most use cases. It appears less well optimized if you get a large number of objects and then free them as a batch; if your application behaves like that, you should probably modify the pool.

I think an Interlocked.Increment approach, as you suggest, could be more general and more efficient for that batch use case.

http://source.roslyn.io/#Microsoft.CodeAnalysis.Workspaces/ObjectPool%25601.cs,98aa6d9b3c4e313b

// Copyright (c) Microsoft.  All Rights Reserved.  Licensed under the Apache License, Version 2.0.  See License.txt in the project root for license information.

// define TRACE_LEAKS to get additional diagnostics that can lead to the leak sources. note: it will
// make everything about 2-3x slower
// 
// #define TRACE_LEAKS

// define DETECT_LEAKS to detect possible leaks
// #if DEBUG
// #define DETECT_LEAKS  //for now always enable DETECT_LEAKS in debug.
// #endif

using System;
using System.Diagnostics;
using System.Threading;

#if DETECT_LEAKS
using System.Runtime.CompilerServices;

#endif
namespace Microsoft.CodeAnalysis.PooledObjects
{
    /// <summary>
    /// Generic implementation of object pooling pattern with predefined pool size limit. The main
    /// purpose is that limited number of frequently used objects can be kept in the pool for
    /// further recycling.
    /// 
    /// Notes: 
    /// 1) it is not the goal to keep all returned objects. Pool is not meant for storage. If there
    ///    is no space in the pool, extra returned objects will be dropped.
    /// 
    /// 2) it is implied that if object was obtained from a pool, the caller will return it back in
    ///    a relatively short time. Keeping checked out objects for long durations is ok, but 
    ///    reduces usefulness of pooling. Just new up your own.
    /// 
    /// Not returning objects to the pool in not detrimental to the pool's work, but is a bad practice. 
    /// Rationale: 
    ///    If there is no intent for reusing the object, do not use pool - just use "new". 
    /// </summary>
    internal class ObjectPool<T> where T : class
    {
        [DebuggerDisplay("{Value,nq}")]
        private struct Element
        {
            internal T Value;
        }

        /// <remarks>
        /// Not using System.Func{T} because this file is linked into the (debugger) Formatter,
        /// which does not have that type (since it compiles against .NET 2.0).
        /// </remarks>
        internal delegate T Factory();

        // Storage for the pool objects. The first item is stored in a dedicated field because we
        // expect to be able to satisfy most requests from it.
        private T _firstItem;
        private readonly Element[] _items;

        // factory is stored for the lifetime of the pool. We will call this only when pool needs to
        // expand. compared to "new T()", Func gives more flexibility to implementers and faster
        // than "new T()".
        private readonly Factory _factory;

#if DETECT_LEAKS
        private static readonly ConditionalWeakTable<T, LeakTracker> leakTrackers = new ConditionalWeakTable<T, LeakTracker>();

        private class LeakTracker : IDisposable
        {
            private volatile bool disposed;

#if TRACE_LEAKS
            internal volatile object Trace = null;
#endif

            public void Dispose()
            {
                disposed = true;
                GC.SuppressFinalize(this);
            }

            private string GetTrace()
            {
#if TRACE_LEAKS
                return Trace == null ? "" : Trace.ToString();
#else
                return "Leak tracing information is disabled. Define TRACE_LEAKS on ObjectPool`1.cs to get more info \n";
#endif
            }

            ~LeakTracker()
            {
                if (!this.disposed && !Environment.HasShutdownStarted)
                {
                    var trace = GetTrace();

                    // If you are seeing this message it means that object has been allocated from the pool 
                    // and has not been returned back. This is not critical, but turns pool into rather 
                    // inefficient kind of "new".
                    Debug.WriteLine($"TRACEOBJECTPOOLLEAKS_BEGIN\nPool detected potential leaking of {typeof(T)}. \n Location of the leak: \n {GetTrace()} TRACEOBJECTPOOLLEAKS_END");
                }
            }
        }
#endif

        internal ObjectPool(Factory factory)
            : this(factory, Environment.ProcessorCount * 2)
        { }

        internal ObjectPool(Factory factory, int size)
        {
            Debug.Assert(size >= 1);
            _factory = factory;
            _items = new Element[size - 1];
        }

        private T CreateInstance()
        {
            var inst = _factory();
            return inst;
        }

        /// <summary>
        /// Produces an instance.
        /// </summary>
        /// <remarks>
        /// Search strategy is a simple linear probing which is chosen for it cache-friendliness.
        /// Note that Free will try to store recycled objects close to the start thus statistically 
        /// reducing how far we will typically search.
        /// </remarks>
        internal T Allocate()
        {
            // PERF: Examine the first element. If that fails, AllocateSlow will look at the remaining elements.
            // Note that the initial read is optimistically not synchronized. That is intentional. 
            // We will interlock only when we have a candidate. in a worst case we may miss some
            // recently returned objects. Not a big deal.
            T inst = _firstItem;
            if (inst == null || inst != Interlocked.CompareExchange(ref _firstItem, null, inst))
            {
                inst = AllocateSlow();
            }

#if DETECT_LEAKS
            var tracker = new LeakTracker();
            leakTrackers.Add(inst, tracker);

#if TRACE_LEAKS
            var frame = CaptureStackTrace();
            tracker.Trace = frame;
#endif
#endif
            return inst;
        }

        private T AllocateSlow()
        {
            var items = _items;

            for (int i = 0; i < items.Length; i++)
            {
                // Note that the initial read is optimistically not synchronized. That is intentional. 
                // We will interlock only when we have a candidate. in a worst case we may miss some
                // recently returned objects. Not a big deal.
                T inst = items[i].Value;
                if (inst != null)
                {
                    if (inst == Interlocked.CompareExchange(ref items[i].Value, null, inst))
                    {
                        return inst;
                    }
                }
            }

            return CreateInstance();
        }

        /// <summary>
        /// Returns objects to the pool.
        /// </summary>
        /// <remarks>
        /// Search strategy is a simple linear probing which is chosen for it cache-friendliness.
        /// Note that Free will try to store recycled objects close to the start thus statistically 
        /// reducing how far we will typically search in Allocate.
        /// </remarks>
        internal void Free(T obj)
        {
            Validate(obj);
            ForgetTrackedObject(obj);

            if (_firstItem == null)
            {
                // Intentionally not using interlocked here. 
                // In a worst case scenario two objects may be stored into same slot.
                // It is very unlikely to happen and will only mean that one of the objects will get collected.
                _firstItem = obj;
            }
            else
            {
                FreeSlow(obj);
            }
        }

        private void FreeSlow(T obj)
        {
            var items = _items;
            for (int i = 0; i < items.Length; i++)
            {
                if (items[i].Value == null)
                {
                    // Intentionally not using interlocked here. 
                    // In a worst case scenario two objects may be stored into same slot.
                    // It is very unlikely to happen and will only mean that one of the objects will get collected.
                    items[i].Value = obj;
                    break;
                }
            }
        }

        /// <summary>
        /// Removes an object from leak tracking.  
        /// 
        /// This is called when an object is returned to the pool.  It may also be explicitly 
        /// called if an object allocated from the pool is intentionally not being returned
        /// to the pool.  This can be of use with pooled arrays if the consumer wants to 
        /// return a larger array to the pool than was originally allocated.
        /// </summary>
        [Conditional("DEBUG")]
        internal void ForgetTrackedObject(T old, T replacement = null)
        {
#if DETECT_LEAKS
            LeakTracker tracker;
            if (leakTrackers.TryGetValue(old, out tracker))
            {
                tracker.Dispose();
                leakTrackers.Remove(old);
            }
            else
            {
                var trace = CaptureStackTrace();
                Debug.WriteLine($"TRACEOBJECTPOOLLEAKS_BEGIN\nObject of type {typeof(T)} was freed, but was not from pool. \n Callstack: \n {trace} TRACEOBJECTPOOLLEAKS_END");
            }

            if (replacement != null)
            {
                tracker = new LeakTracker();
                leakTrackers.Add(replacement, tracker);
            }
#endif
        }

#if DETECT_LEAKS
        private static Lazy<Type> _stackTraceType = new Lazy<Type>(() => Type.GetType("System.Diagnostics.StackTrace"));

        private static object CaptureStackTrace()
        {
            return Activator.CreateInstance(_stackTraceType.Value);
        }
#endif

        [Conditional("DEBUG")]
        private void Validate(object obj)
        {
            Debug.Assert(obj != null, "freeing null?");

            Debug.Assert(_firstItem != obj, "freeing twice?");

            var items = _items;
            for (int i = 0; i < items.Length; i++)
            {
                var value = items[i].Value;
                if (value == null)
                {
                    return;
                }

                Debug.Assert(value != obj, "freeing twice?");
            }
        }
    }
}
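
If you copy this class into your own project, usage looks roughly like this; pooling StringBuilder here is just an example of mine, not something the Roslyn source prescribes:

using System.Text;
using Microsoft.CodeAnalysis.PooledObjects;

class Example
{
    private static readonly ObjectPool<StringBuilder> s_pool =
        new ObjectPool<StringBuilder>(() => new StringBuilder());

    static string Build()
    {
        StringBuilder sb = s_pool.Allocate();
        try
        {
            sb.Append("hello");
            return sb.ToString();
        }
        finally
        {
            sb.Clear();        // reset state before handing it back
            s_pool.Free(sb);   // return the instance to the pool
        }
    }
}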