libbinder: RPC clear behavior

Previously, when RpcState cleared its state in response to an
error, two issues related to proxy destruction could occur:
- a BpBinder could have lost its last strong ref on another thread
  without the destruction being reflected in RpcSession yet, so
  promoting the node's wp<> could fail (sketched below). This is
  the issue causing crashes in the callback test (when the call is
  oneway, the callback is not oneway, and the call is delayed).
- this code could run the BpBinder destructor while mNodeMutex was
  held, if mNodeForAddress was the last wp<> holder of the BpBinder
  (which has object lifetime weak, so dropping the last wp<> runs
  the destructor). This could hypothetically deadlock if an object
  attached via attachObject made binder calls in its destructor.
  To prevent this, the whole node (including the 'binder' wp<>,
  not just 'node.sentRef') is now held onto and destroyed only
  after mNodeMutex is released.
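
To make the first race concrete, here is a minimal standalone
sketch (hypothetical names; std::shared_ptr/std::weak_ptr standing
in for sp<>/wp<>) of why a fatal assertion on promotion was unsound:

    #include <cassert>
    #include <memory>
    #include <thread>

    int main() {
        auto strong = std::make_shared<int>(42);  // ~ last BpBinder strong ref
        std::weak_ptr<int> inMap = strong;        // ~ node.binder wp<> in the map

        // another thread drops the last strong ref; the map entry is
        // not erased yet, mirroring the RpcSession bookkeeping lag
        std::thread other([&] { strong.reset(); });
        other.join();

        // promotion now fails even though the node still exists; the
        // old code aborted on this instead of tolerating it
        assert(inMap.lock() == nullptr);
        return 0;
    }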

Fixes: 237330627
Test: (running for several minutes)
    m binderRpcTest && adb sync && adb shell "while /data/nativetest64/binderRpcTest/binderRpcTest --gtest_filter='*Callbacks*'; do logcat -c; done"

Change-Id: I21a702217b0749932d77c3acf11e879ee77dd22b
diff --git a/libs/binder/RpcState.cpp b/libs/binder/RpcState.cpp
index 1c5654c..d063001 100644
--- a/libs/binder/RpcState.cpp
+++ b/libs/binder/RpcState.cpp
@@ -244,33 +244,31 @@
                             "New state should be impossible after terminating!");
         return;
     }
+    mTerminated = true;
 
     if (SHOULD_LOG_RPC_DETAIL) {
         ALOGE("RpcState::clear()");
         dumpLocked();
     }
 
-    // if the destructor of a binder object makes another RPC call, then calling
-    // decStrong could deadlock. So, we must hold onto these binders until
-    // mNodeMutex is no longer taken.
-    std::vector<sp<IBinder>> tempHoldBinder;
-
-    mTerminated = true;
+    // invariant: anything we have sent must still be held by sentRef
     for (auto& [address, node] : mNodeForAddress) {
-        sp<IBinder> binder = node.binder.promote();
-        LOG_ALWAYS_FATAL_IF(binder == nullptr,
-                            "Binder expected to be owned with address: %" PRIu64 " %s", address,
-                            node.toString().c_str());
-
-        if (node.sentRef != nullptr) {
-            tempHoldBinder.push_back(node.sentRef);
+        bool guaranteedHaveBinder = node.timesSent > 0;
+        if (guaranteedHaveBinder) {
+            LOG_ALWAYS_FATAL_IF(node.sentRef == nullptr,
+                                "Binder expected to be owned with address: %" PRIu64 " %s", address,
+                                node.toString().c_str());
         }
     }
 
-    mNodeForAddress.clear();
+    // if the destructor of a binder object makes another RPC call, then calling
+    // decStrong could deadlock. So, we must hold onto these binders until
+    // mNodeMutex is no longer taken.
+    auto temp = std::move(mNodeForAddress);
+    mNodeForAddress.clear(); // moved-from state is unspecified; clear explicitly (RpcState isn't reusable)
 
     _l.unlock();
-    tempHoldBinder.clear(); // explicit
+    temp.clear(); // explicit
 }
 
 void RpcState::dumpLocked() {
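
The std::move pattern in the hunk above is what defers destruction
past the unlock. A minimal standalone sketch of the idea (hypothetical
names; std::shared_ptr standing in for the node's refs, a plain
std::mutex for mNodeMutex):

    #include <cstdint>
    #include <map>
    #include <memory>
    #include <mutex>

    std::mutex gMutex;  // ~ mNodeMutex

    struct Node {
        ~Node() {
            // stand-in for an attachObject'd object whose destructor
            // makes a call that re-acquires the session lock
            std::lock_guard<std::mutex> lock(gMutex);
        }
    };

    std::map<uint64_t, std::shared_ptr<Node>> gNodes;  // ~ mNodeForAddress

    void clearGood() {
        std::map<uint64_t, std::shared_ptr<Node>> temp;
        {
            std::lock_guard<std::mutex> lock(gMutex);
            temp = std::move(gNodes);
            gNodes.clear();  // moved-from state is unspecified
        }
        // temp is destroyed here, after gMutex is released, so ~Node()
        // can safely take the lock; clearing inside the critical
        // section would self-deadlock on a non-recursive mutex
    }

    int main() {
        gNodes.emplace(0, std::make_shared<Node>());
        clearGood();
        return 0;
    }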
diff --git a/libs/binder/tests/binderRpcTest.cpp b/libs/binder/tests/binderRpcTest.cpp
index fca3c29..8afb403 100644
--- a/libs/binder/tests/binderRpcTest.cpp
+++ b/libs/binder/tests/binderRpcTest.cpp
@@ -1029,13 +1029,6 @@
                 // since this session has an incoming connection w/ a threadpool, we
                 // need to manually shut it down
                 EXPECT_TRUE(proc.proc.sessions.at(0).session->shutdownAndWait(true));
-
-                proc.proc.host.setCustomExitStatusCheck([](int wstatus) {
-                    // Flaky. Sometimes gets SIGABRT.
-                    EXPECT_TRUE((WIFEXITED(wstatus) && WEXITSTATUS(wstatus) == 0) ||
-                                (WIFSIGNALED(wstatus) && WTERMSIG(wstatus) == SIGABRT))
-                            << "server process failed: " << WaitStatusToString(wstatus);
-                });
                 proc.expectAlreadyShutdown = true;
             }
         }