Fix audio timestamp computation for pause, stop, and dynamic speed changes

Timestamps on pause and underrun (stop) do not reflect the actual position.
Timestamps do not account for dynamic changes to track speed / sample rate.

Bug: 11085154
Bug: 17552775
Change-Id: I0e5e40ab3eaee82f0c91b9f399089698a0b1947e
diff --git a/services/audioflinger/AudioFlinger.h b/services/audioflinger/AudioFlinger.h
index 62a3115..f2f11e3 100644
--- a/services/audioflinger/AudioFlinger.h
+++ b/services/audioflinger/AudioFlinger.h
@@ -57,6 +57,7 @@
 #include "AudioStreamOut.h"
 #include "SpdifStreamOut.h"
 #include "AudioHwDevice.h"
+#include "LinearMap.h"
 
 #include <powermanager/IPowerManager.h>
 
diff --git a/services/audioflinger/LinearMap.h b/services/audioflinger/LinearMap.h
new file mode 100644
index 0000000..fca14dd
--- /dev/null
+++ b/services/audioflinger/LinearMap.h
@@ -0,0 +1,366 @@
+/*
+ * Copyright 2015 The Android Open Source Project
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef ANDROID_LINEAR_MAP_H
+#define ANDROID_LINEAR_MAP_H
+
+#include <stdint.h>
+
+namespace android {
+
+/*
+A general purpose lookup utility that defines a mapping between X and Y as a
+continuous set of line segments with shared (x, y) end-points.
+The (x, y) points must be added in order, monotonically increasing in both x and y;
+a log warning is emitted if this does not happen (See general usage notes below).
+
+A limited history of (x, y) points is kept for space reasons (See general usage notes).
+
+In AudioFlinger, we use the LinearMap to associate track frames to
+sink frames.  When we want to obtain a client track timestamp, we first
+get a timestamp from the sink.  The sink timestamp's position (mPosition)
+corresponds to the sink frames written. We use LinearMap to figure out which track frame
+the sink frame corresponds to. This allows us to substitute a track frame for
+the sink frame (keeping the mTime identical) and return that timestamp back to the client.
+
+The method findX() can be used to retrieve an x value from a given y value and is
+used for timestamps; findY() is provided similarly, for completeness.
+
+We update the (track frame, sink frame) points in the LinearMap each time we write data
+to the sink by the AudioFlinger PlaybackThread (MixerThread).
+
+
+AudioFlinger Timestamp Notes:
+
+1) Example: Obtaining a track timestamp during playback.  In this case, the LinearMap
+looks something like this:
+
+Track Frame    Sink Frame
+(track start)
+0              50000  (track starts here, the sink may already be running)
+1000           51000
+2000           52000
+
+When we request a track timestamp, we call the sink getTimestamp() and get for example
+mPosition = 51020.  Using the LinearMap, we find we have played to track frame 1020.
+We substitute the sink mPosition of 51020 with the track position 1020,
+and return that timestamp to the app.
+
+2) Example: Obtaining a track timestamp during pause. In this case, the LinearMap
+looks something like this:
+
+Track Frame    Sink Frame
+... (some time has gone by)
+15000          30000
+16000          31000
+17000          32000
+(pause here)
+(suppose we call sink getTimestamp() here and get sink mPosition = 31100; that means
+        we have played to track frame 16100.  The track timestamp mPosition will
+        continue to advance until the sink timestamp returns a value of mPosition
+        greater than 32000, corresponding to track frame 17000 when the pause was called).
+17000          33000
+17000          34000
+...
+
+3) If the track underruns, it appears as if a pause was called on that track.
+
+4) If there is an underrun in the HAL layer, the sink getTimestamp() may return a value
+greater than the number of frames written (it should always be less than or equal).
+This should be rare, if not impossible, depending on the HAL implementation of the
+sink getTimestamp(). In that case, timing is lost
+and we will return the most recent track frame written.
+
+5) When called with no points in the map, findX() returns the start value (default 0).
+This is consistent with starting after a stop() or flush().
+
+6) Resuming after Track standby will be similar to coming out of pause, as the HAL ensures
+framesWritten() and getTimestamp() are contiguous for non-offloaded/direct tracks.
+
+7) LinearMap works for different speeds and sample rates as it uses
+linear interpolation. Since AudioFlinger only updates speed and sample rate
+exactly at the sample points pushed into the LinearMap, the returned values
+from findX() and findY() are accurate regardless of how many speed or sample
+rate changes are made, so long as the coordinate looked up is within the
+sample history.
+
+General usage notes:
+
+1) In order for the LinearMap to work reliably, you cannot look backwards more
+than the size of its circular buffer history, set upon creation (typically 16).
+If you look back further, the position is extrapolated either from a passed in
+extrapolation parameter or from the oldest line segment.
+
+2) Points must monotonically increase in x and y. The increment between adjacent
+points cannot be greater than signed 32 bits. Wrap in the x, y coordinates is supported,
+since we use differences in our computation.
+
+3) If the frame data is discontinuous (due to stop or flush) call reset() to clear
+the sample counter.
+
+4) If (x, y) are not strictly monotonically increasing, i.e. (x2 > x1) and (y2 > y1),
+then one or both of the inverses y = f(x) or x = g(y) may have multiple solutions.
+In that case, the most recent solution is returned by findX() or findY().  We
+do not warn if (x2 == x1) or (y2 == y1), but we do logcat warn if (x2 < x1) or
+(y2 < y1).
+
+5) Due to rounding, it is possible that x != findX(findY(x)) or y != findY(findX(y)),
+even when the inverse exists. Nevertheless, the values should be close.
+
+*/
+
+template <typename T>
+class LinearMap {
+public:
+    // This enumeration describes the reliability of the findX() or findY() estimation
+    // in descending order.
+    enum FindMethod {
+        FIND_METHOD_INTERPOLATION,           // High reliability (errors due to rounding)
+        FIND_METHOD_FORWARD_EXTRAPOLATION,   // Reliability based on no future speed changes
+        FIND_METHOD_BACKWARD_EXTRAPOLATION,  // Reliability based on prior estimated speed
+        FIND_METHOD_START_VALUE,             // No samples in history, using start value
+    };
+
+    LinearMap(size_t size)
+            : mSize(size),
+              mPos(0), // a circular buffer, so could start anywhere; the first sample is stored at index 1.
+              mSamples(0),
+              // mStepValid(false),      // only valid if mSamples > 1
+              // mExtrapolateTail(false), // only valid if mSamples > 0
+              mX(new T[size]),
+              mY(new T[size]) { }
+
+    ~LinearMap() {
+        delete[] mX;
+        delete[] mY;
+    }
+
+    // Add a new sample point to the linear map.
+    //
+    // The difference between the new sample and the previous sample
+    // in the x or y coordinate must be less than INT32_MAX for purposes
+    // of the linear interpolation or extrapolation.
+    //
+    // The values should be monotonically increasing (i.e. diff >= 0);
+    // logcat warnings are issued if they are not.
+    __attribute__((no_sanitize("integer")))
+    void push(T x, T y) {
+        // Assumption: we assume x, y are monotonically increasing values,
+        // which may wrap in precision no less than 32 bits and have
+        // "step" or differences between adjacent points less than 32 bits.
+
+        if (mSamples > 0) {
+            const bool lastStepValid = mStepValid;
+            int32_t xdiff;
+            int32_t ydiff;
+            // check difference assumption here
+            mStepValid = checkedDiff(&xdiff, x, mX[mPos], "x")
+                    & /* bitwise AND to always warn for ydiff, though logical AND is also OK */
+                    checkedDiff(&ydiff, y, mY[mPos], "y");
+
+            // Optimization: do not add a new sample if the line segment would
+            // simply extend the previous line segment.  This extends the useful
+            // history by removing redundant points.
+            if (mSamples > 1 && mStepValid && lastStepValid) {
+                const size_t prev = previousPosition();
+                const int32_t xdiff2 = x - mX[prev];
+                const int32_t ydiff2 = y - mY[prev];
+
+                // if both current step and previous step are valid (non-negative and
+                // less than INT32_MAX for precision greater than 4 bytes)
+                // then the sum of the two steps is valid when the
+                // int32_t difference is non-negative.
+                if (xdiff2 >= 0 && ydiff2 >= 0
+                        && (int64_t)xdiff2 * ydiff == (int64_t)ydiff2 * xdiff) {
+                    // ALOGD("reusing sample! (%u, %u) sample depth %zd", x, y, mSamples);
+                    mX[mPos] = x;
+                    mY[mPos] = y;
+                    return;
+                }
+            }
+        }
+        if (++mPos >= mSize) {
+            mPos = 0;
+        }
+        if (mSamples < mSize) {
+            mExtrapolateTail = false;
+            ++mSamples;
+        } else {
+            // we enable extrapolation beyond the oldest sample
+            // if the sample buffers are completely full and we
+            // no longer know the full history.
+            mExtrapolateTail = true;
+        }
+        mX[mPos] = x;
+        mY[mPos] = y;
+    }
+
+    // clear all samples from the circular array
+    void reset() {
+        // no need to reset mPos, we use a circular buffer.
+        // computed values such as mStepValid are set after a subsequent push().
+        mSamples = 0;
+    }
+
+    // returns true if LinearMap contains at least one sample.
+    bool hasData() const {
+        return mSamples != 0;
+    }
+
+    // find the corresponding X point from a Y point.
+    // See findU for details.
+    __attribute__((no_sanitize("integer")))
+    T findX(T y, FindMethod *method = NULL, double extrapolation = 0.0, T startValue = 0) const {
+        return findU(y, mX, mY, method, extrapolation, startValue);
+    }
+
+    // find the corresponding Y point from an X point.
+    // See findU for details.
+    __attribute__((no_sanitize("integer")))
+    T findY(T x, FindMethod *method = NULL, double extrapolation = 0.0, T startValue = 0) const {
+        return findU(x, mY, mX, method, extrapolation, startValue);
+    }
+
+protected:
+
+    // returns false if the diff is out of int32_t bounds or negative.
+    __attribute__((no_sanitize("integer")))
+    static inline bool checkedDiff(int32_t *diff, T x2, T x1, const char *coord) {
+        if (sizeof(T) >= 8) {
+            const int64_t diff64 = x2 - x1;
+            *diff = (int32_t)diff64;  // intentionally lose precision
+            if (diff64 > INT32_MAX) {
+                ALOGW("LinearMap: %s overflow diff(%lld) from %llu - %llu exceeds INT32_MAX",
+                        coord, (long long)diff64,
+                        (unsigned long long)x2, (unsigned long long)x1);
+                return false;
+            } else if (diff64 < 0) {
+                ALOGW("LinearMap: %s negative diff(%lld) from %llu - %llu",
+                        coord, (long long)diff64,
+                        (unsigned long long)x2, (unsigned long long)x1);
+                return false;
+            }
+            return true;
+        }
+        // for 32 bit integers we cannot detect overflow (it
+        // shows up as a negative difference).
+        *diff = x2 - x1;
+        if (*diff < 0) {
+            ALOGW("LinearMap: %s negative diff(%d) from %u - %u",
+                    coord, *diff, (unsigned)x2, (unsigned)x1);
+            return false;
+        }
+        return true;
+    }
+
+    // Returns the previous position in the mSamples array,
+    // going backward by `back` steps.
+    //
+    // Parameters:
+    //   back: number of backward steps, cannot be less than zero or greater than mSamples.
+    //
+    __attribute__((no_sanitize("integer")))
+    size_t previousPosition(ssize_t back = 1) const {
+        LOG_ALWAYS_FATAL_IF(back < 0 || (size_t)back > mSamples, "Invalid back(%zd)", back);
+        ssize_t position = mPos - back;
+        if (position < 0) position += mSize;
+        return (size_t)position;
+    }
+
+    // A generic implementation of finding the "other coordinate" with coordinates
+    // (u, v) = (x, y) or (u, v) = (y, x).
+    //
+    // Parameters:
+    //   uArray: the u axis samples.
+    //   vArray: the v axis samples.
+    //   method: [out] how the returned value was computed.
+    //   extrapolation: the slope used when extrapolating from the
+    //     first sample value or the last sample value in the history.
+    //     If mExtrapolateTail is set, the slope of the last line segment
+    //     is used if the extrapolation parameter is zero to continue the tail of history.
+    //     At this time, we do not use a different value for forward extrapolation from the
+    //     head of history from backward extrapolation from the tail of history.
+    //     TODO: back extrapolation value could be stored along with mX, mY in history.
+    //   startValue: used only when there are no samples in history. One can detect
+    //     whether there are samples in history by the method hasData().
+    //
+    __attribute__((no_sanitize("integer")))
+    T findU(T v, T *uArray, T *vArray, FindMethod *method,
+            double extrapolation, T startValue) const {
+        if (mSamples == 0) {
+            if (method != NULL) {
+                *method = FIND_METHOD_START_VALUE;
+            }
+            return startValue;  // nothing yet
+        }
+        ssize_t previous = 0;
+        int32_t diff = 0;
+        for (ssize_t i = 0; i < (ssize_t)mSamples; ++i) {
+            size_t current = previousPosition(i);
+
+            // Assumption: even though the type "T" may have precision greater
+            // than 32 bits, the difference between adjacent points is limited to 32 bits.
+            diff = v - vArray[current];
+            if (diff >= 0 ||
+                    (i == (ssize_t)mSamples - 1 && mExtrapolateTail && extrapolation == 0.0)) {
+                // ALOGD("depth = %zd out of %zd", i, limit);
+                if (i == 0) {
+                    if (method != NULL) {
+                        *method = FIND_METHOD_FORWARD_EXTRAPOLATION;
+                    }
+                    return uArray[current] + diff * extrapolation;
+                }
+                // interpolate / extrapolate: For this computation, we
+                // must use differentials here otherwise we have inconsistent
+                // values on modulo wrap. previous is always valid here since
+                // i > 0.  we also perform rounding with the assumption
+                // that uStep, vStep, and diff are non-negative.
+                int32_t uStep = uArray[previous] - uArray[current]; // non-negative
+                int32_t vStep = vArray[previous] - vArray[current]; // positive
+                T u = uStep <= 0 || vStep <= 0 ?  // we do not permit negative uStep or vStep
+                        uArray[current]
+                      : ((int64_t)diff * uStep + (vStep >> 1)) / vStep + uArray[current];
+                // ALOGD("u:%u  diff:%d  uStep:%d  vStep:%d  u_current:%d",
+                //         u, diff, uStep, vStep, uArray[current]);
+                if (method != NULL) {
+                    *method = (diff >= 0) ?
+                            FIND_METHOD_INTERPOLATION : FIND_METHOD_BACKWARD_EXTRAPOLATION;
+                }
+                return u;
+            }
+            previous = current;
+        }
+        // previous is always valid here.
+        if (method != NULL) {
+            *method = FIND_METHOD_BACKWARD_EXTRAPOLATION;
+        }
+        return uArray[previous] + diff * extrapolation;
+    }
+
+private:
+    const size_t    mSize;      // Size of mX and mY arrays (history).
+    size_t          mPos;       // Index in mX and mY of last pushed data;
+                                // (incremented after push) [0, mSize - 1].
+    size_t          mSamples;   // Number of valid samples in the array [0, mSize].
+    bool            mStepValid; // Last sample step was valid (non-negative)
+    bool            mExtrapolateTail; // extrapolate tail using oldest line segment
+    T * const       mX;         // History of X values as a circular array.
+    T * const       mY;         // History of Y values as a circular array.
+};
+
+} // namespace android
+
+#endif // ANDROID_LINEAR_MAP_H
diff --git a/services/audioflinger/PlaybackTracks.h b/services/audioflinger/PlaybackTracks.h
index 1450ca1..c81bbf9 100644
--- a/services/audioflinger/PlaybackTracks.h
+++ b/services/audioflinger/PlaybackTracks.h
@@ -100,6 +100,8 @@
     void flushAck();
     bool isResumePending();
     void resumeAck();
+    void updateTrackFrameInfo(uint32_t trackFramesReleased, uint32_t sinkFramesWritten,
+            AudioTimestamp *timeStamp = NULL);
 
     sp<IMemory> sharedBuffer() const { return mSharedBuffer; }
 
@@ -137,6 +139,12 @@
     size_t              mPresentationCompleteFrames; // number of frames written to the
                                     // audio HAL when this track will be fully rendered
                                     // zero means not monitoring
+
+    // access these three variables only when holding thread lock.
+    LinearMap<uint32_t> mFrameMap;           // track frame to server frame mapping
+    bool                mSinkTimestampValid; // valid cached timestamp
+    AudioTimestamp      mSinkTimestamp;
+
 private:
     // The following fields are only for fast tracks, and should be in a subclass
     int                 mFastIndex; // index within FastMixerState::mFastTracks[];
diff --git a/services/audioflinger/Threads.cpp b/services/audioflinger/Threads.cpp
index 2fd5758..70c5e1d 100644
--- a/services/audioflinger/Threads.cpp
+++ b/services/audioflinger/Threads.cpp
@@ -1579,9 +1579,7 @@
         mScreenState(AudioFlinger::mScreenState),
         // index 0 is reserved for normal mixer's submix
         mFastTrackAvailMask(((1 << FastMixerState::kMaxFastTracks) - 1) & ~1),
-        mHwSupportsPause(false), mHwPaused(false), mFlushPending(false),
-        // mLatchD, mLatchQ,
-        mLatchDValid(false), mLatchQValid(false)
+        mHwSupportsPause(false), mHwPaused(false), mFlushPending(false)
 {
     snprintf(mThreadName, kThreadNameLength, "AudioOut_%X", id);
     mNBLogWriter = audioFlinger->newWriter_l(kLogSize, mThreadName);
@@ -2554,16 +2552,6 @@
         } else {
             bytesWritten = framesWritten;
         }
-        mLatchDValid = false;
-        status_t status = mNormalSink->getTimestamp(mLatchD.mTimestamp);
-        if (status == NO_ERROR) {
-            size_t totalFramesWritten = mNormalSink->framesWritten();
-            if (totalFramesWritten >= mLatchD.mTimestamp.mPosition) {
-                mLatchD.mUnpresentedFrames = totalFramesWritten - mLatchD.mTimestamp.mPosition;
-                // mLatchD.mFramesReleased is set immediately before D is clocked into Q
-                mLatchDValid = true;
-            }
-        }
     // otherwise use the HAL / AudioStreamOut directly
     } else {
         // Direct output and offload threads
@@ -2869,21 +2857,47 @@
             }
 
             // Gather the framesReleased counters for all active tracks,
-            // and latch them atomically with the timestamp.
-            // FIXME We're using raw pointers as indices. A unique track ID would be a better index.
-            mLatchD.mFramesReleased.clear();
-            size_t size = mActiveTracks.size();
-            for (size_t i = 0; i < size; i++) {
-                sp<Track> t = mActiveTracks[i].promote();
-                if (t != 0) {
-                    mLatchD.mFramesReleased.add(t.get(),
-                            t->mAudioTrackServerProxy->framesReleased());
+            // and associate with the sink frames written out.  We need
+            // this to convert the sink timestamp to the track timestamp.
+            if (mNormalSink != 0) {
+                bool updateTracks = true;
+                bool cacheTimestamp = false;
+                AudioTimestamp timeStamp;
+                // FIXME: Use a 64 bit mNormalSink->framesWritten() counter.
+                // At this time, we must always use cached timestamps even when
+                // going through mPipeSink (which is non-blocking). The reason is that
+                // the track may be removed from the active list for many hours and
+                // the mNormalSink->framesWritten() will wrap making the linear
+                // mapping fail.
+                //
+                // (Also mAudioTrackServerProxy->framesReleased() needs to be
+                // updated to 64 bits for 64 bit frame position.)
+                //
+                if (true /* see comment above, should be: mNormalSink == mOutputSink */) {
+                    // If we use a hardware device, we must cache the sink timestamp now.
+                    // hardware devices can block timestamp access during data writes.
+                    if (mNormalSink->getTimestamp(timeStamp) == NO_ERROR) {
+                        cacheTimestamp = true;
+                    } else {
+                        updateTracks = false;
+                    }
                 }
-            }
-            if (mLatchDValid) {
-                mLatchQ = mLatchD;
-                mLatchDValid = false;
-                mLatchQValid = true;
+                if (updateTracks) {
+                    // sinkFramesWritten for non-offloaded tracks are contiguous
+                    // even after standby() is called. This is useful for the track frame
+                    // to sink frame mapping.
+                    const uint32_t sinkFramesWritten = mNormalSink->framesWritten();
+                    const size_t size = mActiveTracks.size();
+                    for (size_t i = 0; i < size; ++i) {
+                        sp<Track> t = mActiveTracks[i].promote();
+                        if (t != 0 && !t->isFastTrack()) {
+                            t->updateTrackFrameInfo(
+                                    t->mAudioTrackServerProxy->framesReleased(),
+                                    sinkFramesWritten,
+                                    cacheTimestamp ? &timeStamp : NULL);
+                        }
+                    }
+                }
             }
 
             saveOutputTracks();
diff --git a/services/audioflinger/Threads.h b/services/audioflinger/Threads.h
index ad47277..7c92c1c 100644
--- a/services/audioflinger/Threads.h
+++ b/services/audioflinger/Threads.h
@@ -841,19 +841,6 @@
                 bool        mHwSupportsPause;
                 bool        mHwPaused;
                 bool        mFlushPending;
-private:
-    // timestamp latch:
-    //  D input is written by threadLoop_write while mutex is unlocked, and read while locked
-    //  Q output is written while locked, and read while locked
-    struct {
-        AudioTimestamp  mTimestamp;
-        uint32_t        mUnpresentedFrames;
-        KeyedVector<Track *, uint32_t> mFramesReleased;
-    } mLatchD, mLatchQ;
-    bool mLatchDValid;  // true means mLatchD is valid
-                        //     (except for mFramesReleased which is filled in later),
-                        //     and clock it into latch at next opportunity
-    bool mLatchQValid;  // true means mLatchQ is valid
 };
 
 class MixerThread : public PlaybackThread {
diff --git a/services/audioflinger/Tracks.cpp b/services/audioflinger/Tracks.cpp
index b4c1fdd..7d0366e 100644
--- a/services/audioflinger/Tracks.cpp
+++ b/services/audioflinger/Tracks.cpp
@@ -50,6 +50,10 @@
 #define ALOGVV(a...) do { } while(0)
 #endif
 
+// TODO move to a common header  (Also shared with AudioTrack.cpp)
+#define NANOS_PER_SECOND    1000000000
+#define TIME_TO_NANOS(time) ((uint64_t)(time).tv_sec * NANOS_PER_SECOND + (time).tv_nsec)
+
 namespace android {
 
 // ----------------------------------------------------------------------------
@@ -357,6 +361,9 @@
     mAuxBuffer(NULL),
     mAuxEffectId(0), mHasVolumeController(false),
     mPresentationCompleteFrames(0),
+    mFrameMap(16 /* sink-frame-to-track-frame map memory */),
+    mSinkTimestampValid(false),
+    // mSinkTimestamp
     mFastIndex(-1),
     mCachedVolume(1.0),
     mIsInvalid(false),
@@ -652,6 +659,11 @@
             ALOGV("? => ACTIVE (%d) on thread %p", mName, this);
         }
 
+        // states in which we reset position info for non-offloaded/direct tracks
+        if (!isOffloaded() && !isDirect()
+                && (state == IDLE || state == STOPPED || state == FLUSHED)) {
+            mFrameMap.reset();
+        }
         PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
         if (isFastTrack()) {
             // refresh fast track underruns on start because that field is never cleared
@@ -858,36 +870,31 @@
     Mutex::Autolock _l(thread->mLock);
     PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
 
-    status_t result = INVALID_OPERATION;
-    if (!isOffloaded() && !isDirect()) {
-        if (!playbackThread->mLatchQValid) {
-            return INVALID_OPERATION;
-        }
-        // FIXME Not accurate under dynamic changes of sample rate and speed.
-        // Do not use track's mSampleRate as it is not current for mixer tracks.
-        uint32_t sampleRate = mAudioTrackServerProxy->getSampleRate();
-        AudioPlaybackRate playbackRate = mAudioTrackServerProxy->getPlaybackRate();
-        uint32_t unpresentedFrames = ((double) playbackThread->mLatchQ.mUnpresentedFrames *
-                sampleRate * playbackRate.mSpeed)/ playbackThread->mSampleRate;
-        // FIXME Since we're using a raw pointer as the key, it is theoretically possible
-        //       for a brand new track to share the same address as a recently destroyed
-        //       track, and thus for us to get the frames released of the wrong track.
-        //       It is unlikely that we would be able to call getTimestamp() so quickly
-        //       right after creating a new track.  Nevertheless, the index here should
-        //       be changed to something that is unique.  Or use a completely different strategy.
-        ssize_t i = playbackThread->mLatchQ.mFramesReleased.indexOfKey(this);
-        uint32_t framesWritten = i >= 0 ?
-                playbackThread->mLatchQ.mFramesReleased[i] :
-                mAudioTrackServerProxy->framesReleased();
-        if (framesWritten >= unpresentedFrames) {
-            timestamp.mPosition = framesWritten - unpresentedFrames;
-            timestamp.mTime = playbackThread->mLatchQ.mTimestamp.mTime;
-            result = NO_ERROR;
-        }
-    } else { // offloaded or direct
-        result = playbackThread->getTimestamp_l(timestamp);
+    if (isOffloaded() || isDirect()) {
+        return playbackThread->getTimestamp_l(timestamp);
     }
 
+    if (!mFrameMap.hasData()) {
+        // WOULD_BLOCK is consistent with AudioTrack::getTimestamp() in the
+        // FLUSHED and STOPPED state.  We should only return INVALID_OPERATION
+        // when this method is not permitted due to configuration or device.
+        return WOULD_BLOCK;
+    }
+    status_t result = OK;
+    if (!mSinkTimestampValid) { // if no sink position, try to fetch again
+        result = playbackThread->getTimestamp_l(mSinkTimestamp);
+    }
+
+    if (result == OK) {
+        // Lookup the track frame corresponding to the sink frame position.
+        timestamp.mPosition = mFrameMap.findX(mSinkTimestamp.mPosition);
+        timestamp.mTime = mSinkTimestamp.mTime;
+        // ALOGD("track (server-side) timestamp: mPosition(%u)  mTime(%llu)",
+        //        timestamp.mPosition, TIME_TO_NANOS(timestamp.mTime));
+    }
+    // (Possible) FIXME: mSinkTimestamp is updated only when the track is on
+    // the Thread active list. If the track is no longer on the thread active
+    // list should we use current time?
     return result;
 }
 
@@ -1077,6 +1084,19 @@
         mResumeToStopping = false;
     }
 }
+
+// To be called with the thread lock held.
+void AudioFlinger::PlaybackThread::Track::updateTrackFrameInfo(
+        uint32_t trackFramesReleased, uint32_t sinkFramesWritten, AudioTimestamp *timeStamp) {
+    mFrameMap.push(trackFramesReleased, sinkFramesWritten);
+    if (timeStamp == NULL) {
+        mSinkTimestampValid = false;
+    } else {
+        mSinkTimestampValid = true;
+        mSinkTimestamp = *timeStamp;
+    }
+}
+
 // ----------------------------------------------------------------------------
 
 AudioFlinger::PlaybackThread::OutputTrack::OutputTrack(
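General usage note 2 in LinearMap.h states that wrap in the x, y coordinates is supported because only differences enter the computation. A quick standalone check of that arithmetic, with illustrative values placed around the uint32_t wrap point:

```cpp
// Standalone check that 32-bit counter wrap is harmless when only differences
// are used, as claimed in LinearMap.h general usage note 2.
#include <cstdint>

// The difference of two wrapped uint32_t counters, reinterpreted as int32_t,
// recovers the true step as long as the step itself fits in 31 bits.
int32_t wrappedDiff(uint32_t x2, uint32_t x1) {
    return (int32_t)(x2 - x1);  // modular subtraction, then reinterpret
}

// Interpolate y for a query x on a single segment (x1,y1)-(x2,y2) whose x
// coordinates straddle UINT32_MAX. Same rounding as LinearMap::findU().
uint32_t interpAcrossWrap(uint32_t x) {
    const uint32_t x1 = 0xFFFFFF00u, y1 = 100;  // just below the wrap point
    const uint32_t x2 = 0x00000100u, y2 = 612;  // just after the wrap
    const int32_t diff  = wrappedDiff(x, x1);   // query offset into the segment
    const int32_t uStep = (int32_t)(y2 - y1);   // 512
    const int32_t vStep = wrappedDiff(x2, x1);  // 512, despite the wrap
    return (uint32_t)(((int64_t)diff * uStep + (vStep >> 1)) / vStep) + y1;
}
```

Here `wrappedDiff(0x00000100u, 0xFFFFFF00u)` yields 512 even though x2 < x1 numerically, so interpolation proceeds normally across the wrap.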