llvm-rL327898.patch

commit 64c3384f94a1eb3e3510d6f66c3bccdfc9d9050b
Author: Nirav Dave <niravd@google.com>
Date: Thu Feb 1 16:11:59 2018 +0000
r327898/dependencies roll up
This is a squash of 13 commits required in the lead up to r327898,
which fixes https://github.com/JuliaLang/julia/issues/27603. The squashed
commits are:
332d15e981e86b9e058087174bb288ba18a15807
b659d3fca5d24c25ee73f979edb382f7f24e05e2
c01d1363ea080170fc5143d72f26eecd9270f03b
eab8a177a4caef9e42ef1d2aeb4ba15dc788d3f2
bedb1391781b009ace95f5586e7fae5f03fe0689
11d041a905f82ac78e7ccf2394773e80b93d147c
e1ec36c55a0127988f42a3329ca835617b30de09
b8d2903300c13d8fd151c8e5dc71017269617539
00884fea345f47ab05174a8f314ecd60d1676d02
28ab04cec0d9888af9d29946b3a048b8340abe0f
3dd52e62ea3087efcca63c3772183d9471abc742
bd3649ff6d6b4d18b3c6de253179d987a120518a
aea03035b9c633e6d745b6d3fc5b6378699f576c
Their commit messages follow below:
[SelectionDAG] Fix UpdateChains handling of TokenFactors
Summary:
In Instruction Selection UpdateChains replaces all matched Nodes'
chain references including interior token factors and deletes them.
This may allow nodes which depend on these interior nodes but are not
part of the set of matched nodes to be left with a dangling dependence.
Avoid this by only doing the replacement for matched non-TokenFactor nodes.
Fixes PR36164.
Reviewers: jonpa, RKSimon, bogner
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D42754
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323977 91177308-0d34-0410-b5e6-96231b3b80d8
Regenerate test result for vastart-defs-eflags.ll. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323596 91177308-0d34-0410-b5e6-96231b3b80d8
Regenerate test result for testb-je-fusion.ll. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323595 91177308-0d34-0410-b5e6-96231b3b80d8
[X86] Avoid using high register trick for test instruction
Summary:
It seems its main effect is to create additional copies when values are in registers that do not support this trick, which increases register pressure and makes the code bigger.
Reviewers: craig.topper, niravd, spatel, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42646
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323888 91177308-0d34-0410-b5e6-96231b3b80d8
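For reference, the identity behind the removed trick is easy to check in isolation. The sketch below (ours, not part of the patch) verifies that testing bit 11 of %eax is the same as testing bit 3 of %ah; the cost the commit describes comes from the fact that only RAX/RBX/RCX/RDX expose a high-byte subregister, so values held anywhere else must first be copied into one of them:

    // Standalone C++ sketch; compiles with any C++11 compiler.
    #include <cassert>
    #include <cstdint>

    int main() {
      // "testl $2048, %eax" (bit 11) is equivalent to "testb $8, %ah"
      // (bit 3 of the high byte), since %ah holds bits 8..15 of %eax.
      for (uint32_t Eax = 0; Eax < (1u << 16); ++Eax) {
        uint8_t Ah = (Eax >> 8) & 0xff; // the %ah subregister
        assert(((Eax & 0x800) != 0) == ((Ah & 0x08) != 0));
      }
      return 0;
    }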
Add a regression test for problems caused by D42646. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323868 91177308-0d34-0410-b5e6-96231b3b80d8
Add test case for truncation and promotion to test. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323663 91177308-0d34-0410-b5e6-96231b3b80d8
[X86] Add test case to ensure testw is generated when optimizing for size. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323687 91177308-0d34-0410-b5e6-96231b3b80d8
[X86] Generate testl instruction through truncates.
Summary:
This was introduced in D42646 but ended up being reverted because the original implementation was buggy.
Depends on D42646
Reviewers: craig.topper, niravd, spatel, hfinkel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D42741
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323899 91177308-0d34-0410-b5e6-96231b3b80d8
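The masking identity that makes this safe can be demonstrated standalone (our sketch, not part of the patch): when the mask fits in the narrow type, a test of the truncated value and a test of the wide value agree, so the truncate can be folded away and a single testl emitted on the wide register.

    #include <cassert>
    #include <cstdint>

    int main() {
      // For a mask that fits in 32 bits, (trunc x) & m == 0 iff x & m == 0,
      // because the mask cannot select any of the truncated-away bits.
      const uint64_t Mask = 0x8; // fits in 32 bits
      for (uint64_t X : {0x0ull, 0x8ull, 0x100000000ull, 0x100000008ull}) {
        bool Narrow = ((uint32_t)X & (uint32_t)Mask) != 0;
        bool Wide = (X & Mask) != 0;
        assert(Narrow == Wide);
      }
      return 0;
    }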
[X86] Don't look for TEST instruction shrinking opportunities when the root node is a X86ISD::SUB.
I don't believe we ever create an X86ISD::SUB with a 0 constant, which is what the TEST handling needs. The ternary operator at the end of this code shows up as only going one way in the llvm-cov report from the bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@324865 91177308-0d34-0410-b5e6-96231b3b80d8
[X86] Teach LowerBUILD_VECTOR to recognize pair-wise splats of 32-bit elements and use a 64-bit broadcast
If we are splatting pairs of 32-bit elements, we can use a 64-bit broadcast to get the job done.
We could probably do this with other sizes too, for example four 16-bit elements. Or we could broadcast pairs of 16-bit elements using a 32-bit element broadcast. But I've left that as a future improvement.
I've also restricted this to AVX2 only because we can only broadcast loads under AVX.
Differential Revision: https://reviews.llvm.org/D42086
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@322730 91177308-0d34-0410-b5e6-96231b3b80d8
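The pattern check itself is a small pairwise-repetition test over the build_vector operands; here is a standalone model of it (ours; the in-tree version is the CanSplat lambda in the X86ISelLowering.cpp hunk below):

    #include <cassert>
    #include <vector>

    // True if Elts is <a, b, a, b, ...>: every element matches the element at
    // index (i % 2), so the whole vector is a broadcast of the 64-bit pair.
    static bool isPairwiseSplat(const std::vector<int> &Elts) {
      if (Elts.size() < 4 || Elts.size() % 2 != 0)
        return false;
      for (size_t I = 2; I != Elts.size(); ++I)
        if (Elts[I] != Elts[I % 2])
          return false;
      return true;
    }

    int main() {
      std::vector<int> Splat = {1, 2, 1, 2, 1, 2, 1, 2};
      std::vector<int> NoSplat = {1, 2, 1, 2, 1, 2, 2, 1};
      assert(isPairwiseSplat(Splat));
      assert(!isPairwiseSplat(NoSplat));
      return 0;
    }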
[DAG, X86] Revert r327197 "Revert r327170, r327171, r327172"
Reland ISel cycle checking improvements after simplifying node id
invariant traversal and correcting typo.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327898 91177308-0d34-0410-b5e6-96231b3b80d8
[ Modified for cherry-pick: Dropped Hexagon and SystemZ changes ]
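The pruning invariant these commits rely on can be modeled in a few lines (our simplification; the real hasPredecessorHelper additionally defers TokenFactors and handles invalidated ids): every node id is strictly greater than its operands' ids, so a search for a predecessor Def may skip any subtree whose root already has a smaller id than Def.

    #include <unordered_set>
    #include <vector>

    struct Node {
      int Id; // topological id: strictly greater than every operand's id
      std::vector<const Node *> Operands;
    };

    // Is Def reachable from From through operand edges?
    static bool isPredecessor(const Node *Def, const Node *From) {
      std::vector<const Node *> Worklist{From};
      std::unordered_set<const Node *> Visited;
      while (!Worklist.empty()) {
        const Node *M = Worklist.back();
        Worklist.pop_back();
        if (M == Def)
          return true;
        if (M->Id < Def->Id)
          continue; // prune: everything below M has an even smaller id
        for (const Node *Op : M->Operands)
          if (Visited.insert(Op).second)
            Worklist.push_back(Op);
      }
      return false;
    }

    int main() {
      Node A{1, {}}, B{2, {&A}}, C{3, {&A, &B}};
      return (isPredecessor(&A, &C) && !isPredecessor(&C, &A)) ? 0 : 1;
    }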
[DAG, X86] Fix ISel-time node insertion ids
As in SystemZ backend, correctly propagate node ids when inserting new
unselected nodes into the DAG during instruction Selection for X86
target.
Fixes PR36865.
Reviewers: jyknight, craig.topper
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D44797
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@328233 91177308-0d34-0410-b5e6-96231b3b80d8
[DAG] Fix node id invalidation in Instruction Selection.
Invalidation should be bit negation. Add missing negation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@328287 91177308-0d34-0410-b5e6-96231b3b80d8
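The encoding fixed here is x => -(x + 1), i.e. two's-complement bit negation. A standalone sketch (ours) of the invalidate/uninvalidate pair defined in this patch shows the properties the fix restores: invalidated ids never collide with -1 (reserved for selected nodes) and the original id is always recoverable.

    #include <cassert>

    static int invalidate(int Id) { return -(Id + 1); } // == ~Id

    static int uninvalidated(int Id) { return Id < -1 ? -(Id + 1) : Id; }

    int main() {
      for (int Id = 1; Id < 1000; ++Id) {
        int Inv = invalidate(Id);
        assert(Inv <= -2);                // distinct from -1 ("selected")
        assert(invalidate(Inv) == Id);    // bit negation is self-inverse
        assert(uninvalidated(Inv) == Id); // original id is recoverable
      }
      return 0;
    }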
Remove failing tests
This removes tests that are failing due to codegen differences,
after the latest set of backports. Fixing these for the backport
branch does not seem worth it.
diff --git a/include/llvm/CodeGen/SelectionDAGISel.h b/include/llvm/CodeGen/SelectionDAGISel.h
index de6849a1eae..e56eafc437c 100644
--- a/include/llvm/CodeGen/SelectionDAGISel.h
+++ b/include/llvm/CodeGen/SelectionDAGISel.h
@@ -110,6 +110,11 @@ public:
CodeGenOpt::Level OptLevel,
bool IgnoreChains = false);
+ static void InvalidateNodeId(SDNode *N);
+ static int getUninvalidatedNodeId(SDNode *N);
+
+ static void EnforceNodeIdInvariant(SDNode *N);
+
// Opcodes used by the DAG state machine:
enum BuiltinOpcodes {
OPC_Scope,
@@ -199,23 +204,28 @@ protected:
/// of the new node T.
void ReplaceUses(SDValue F, SDValue T) {
CurDAG->ReplaceAllUsesOfValueWith(F, T);
+ EnforceNodeIdInvariant(T.getNode());
}
/// ReplaceUses - replace all uses of the old nodes F with the use
/// of the new nodes T.
void ReplaceUses(const SDValue *F, const SDValue *T, unsigned Num) {
CurDAG->ReplaceAllUsesOfValuesWith(F, T, Num);
+ for (unsigned i = 0; i < Num; ++i)
+ EnforceNodeIdInvariant(T[i].getNode());
}
/// ReplaceUses - replace all uses of the old node F with the use
/// of the new node T.
void ReplaceUses(SDNode *F, SDNode *T) {
CurDAG->ReplaceAllUsesWith(F, T);
+ EnforceNodeIdInvariant(T);
}
/// Replace all uses of \c F with \c T, then remove \c F from the DAG.
void ReplaceNode(SDNode *F, SDNode *T) {
CurDAG->ReplaceAllUsesWith(F, T);
+ EnforceNodeIdInvariant(T);
CurDAG->RemoveDeadNode(F);
}
diff --git a/include/llvm/CodeGen/SelectionDAGNodes.h b/include/llvm/CodeGen/SelectionDAGNodes.h
index 522c2f1b2cb..2d974234abf 100644
--- a/include/llvm/CodeGen/SelectionDAGNodes.h
+++ b/include/llvm/CodeGen/SelectionDAGNodes.h
@@ -796,16 +796,44 @@ public:
/// searches to be performed in parallel, caching of results across
/// queries and incremental addition to Worklist. Stops early if N is
/// found but will resume. Remember to clear Visited and Worklists
- /// if DAG changes.
+ /// if DAG changes. MaxSteps gives a maximum number of nodes to visit before
+ /// giving up. The TopologicalPrune flag signals that positive NodeIds are
+ /// topologically ordered (Operands have strictly smaller node id) and search
+ /// can be pruned leveraging this.
static bool hasPredecessorHelper(const SDNode *N,
SmallPtrSetImpl<const SDNode *> &Visited,
SmallVectorImpl<const SDNode *> &Worklist,
- unsigned int MaxSteps = 0) {
+ unsigned int MaxSteps = 0,
+ bool TopologicalPrune = false) {
+ SmallVector<const SDNode *, 8> DeferredNodes;
if (Visited.count(N))
return true;
+
+ // Node Id's are assigned in three places: As a topological
+ // ordering (> 0), during legalization (results in values set to
+ // 0), new nodes (set to -1). If N has a topological id then we
+ // know that all nodes with ids smaller than it cannot be
+ // successors and we need not check them. Filter out any nodes
+ // that cannot be a match. We add them to the worklist before exit
+ // in case of multiple calls. Note that during selection the topological id
+ // may be violated if a node's predecessor is selected before it. We mark
+ // this at selection by negating the id of unselected successors and
+ // restricting topological pruning to positive ids.
+
+ int NId = N->getNodeId();
+ // If the Id was invalidated, reconstruct the original NId.
+ if (NId < -1)
+ NId = -(NId + 1);
+
+ bool Found = false;
while (!Worklist.empty()) {
const SDNode *M = Worklist.pop_back_val();
- bool Found = false;
+ int MId = M->getNodeId();
+ if (TopologicalPrune && M->getOpcode() != ISD::TokenFactor && (NId > 0) &&
+ (MId > 0) && (MId < NId)) {
+ DeferredNodes.push_back(M);
+ continue;
+ }
for (const SDValue &OpV : M->op_values()) {
SDNode *Op = OpV.getNode();
if (Visited.insert(Op).second)
@@ -814,11 +842,16 @@ public:
Found = true;
}
if (Found)
- return true;
+ break;
if (MaxSteps != 0 && Visited.size() >= MaxSteps)
- return false;
+ break;
}
- return false;
+ // Push deferred nodes back on worklist.
+ Worklist.append(DeferredNodes.begin(), DeferredNodes.end());
+ // If we bailed early, conservatively return found.
+ if (MaxSteps != 0 && Visited.size() >= MaxSteps)
+ return true;
+ return Found;
}
/// Return true if all the users of N are contained in Nodes.
diff --git a/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
index bd9fcfb5c1e..17e42240133 100644
--- a/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ b/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -937,6 +937,58 @@ public:
} // end anonymous namespace
+// This function is used to enforce the topological node id property
+// leveraged during instruction selection. Before selection all
+// nodes are given a non-negative id such that all nodes have a larger id than
+// their operands. As this holds transitively we can prune checks that a node N
+// is a predecessor of another node M by not recursively checking through M's
+// operands if N's ID is larger than M's ID. This significantly improves
+// performance of various legality checks (e.g. IsLegalToFold /
+// UpdateChains).
+
+// However, when we fuse multiple nodes into a single node
+// during selection we may induce a predecessor relationship between inputs and
+// outputs of distinct nodes being merged, violating the topological property.
+// Should a fused node have a successor which has yet to be selected, our
+// legality checks would be incorrect. To avoid this we mark all unselected
+// successor nodes, i.e. id != -1, as invalid for pruning by bit-negating (x =>
+// (-(x+1))) the ids and modify our pruning check to ignore negative Ids of M.
+// We use bit-negation to more clearly enforce that node id -1 can only be
+// achieved by selected nodes. As the conversion is reversible, the original
+// Id can be recovered and topological pruning can still be leveraged when
+// looking for unselected nodes. This method is called internally in all ISel replacement calls.
+void SelectionDAGISel::EnforceNodeIdInvariant(SDNode *Node) {
+ SmallVector<SDNode *, 4> Nodes;
+ Nodes.push_back(Node);
+
+ while (!Nodes.empty()) {
+ SDNode *N = Nodes.pop_back_val();
+ for (auto *U : N->uses()) {
+ auto UId = U->getNodeId();
+ if (UId > 0) {
+ InvalidateNodeId(U);
+ Nodes.push_back(U);
+ }
+ }
+ }
+}
+
+// InvalidateNodeId - As discussed in EnforceNodeIdInvariant, mark a
+// NodeId with the equivalent node id which is invalid for topological
+// pruning.
+void SelectionDAGISel::InvalidateNodeId(SDNode *N) {
+ int InvalidId = -(N->getNodeId() + 1);
+ N->setNodeId(InvalidId);
+}
+
+// getUninvalidatedNodeId - get original uninvalidated node id.
+int SelectionDAGISel::getUninvalidatedNodeId(SDNode *N) {
+ int Id = N->getNodeId();
+ if (Id < -1)
+ return -(Id + 1);
+ return Id;
+}
+
void SelectionDAGISel::DoInstructionSelection() {
DEBUG(dbgs() << "===== Instruction selection begins: "
<< printMBBReference(*FuncInfo->MBB) << " '"
@@ -972,6 +1024,33 @@ void SelectionDAGISel::DoInstructionSelection() {
if (Node->use_empty())
continue;
+#ifndef NDEBUG
+ SmallVector<SDNode *, 4> Nodes;
+ Nodes.push_back(Node);
+
+ while (!Nodes.empty()) {
+ auto N = Nodes.pop_back_val();
+ if (N->getOpcode() == ISD::TokenFactor || N->getNodeId() < 0)
+ continue;
+ for (const SDValue &Op : N->op_values()) {
+ if (Op->getOpcode() == ISD::TokenFactor)
+ Nodes.push_back(Op.getNode());
+ else {
+ // We rely on topological ordering of node ids for checking for
+ // cycles when fusing nodes during selection. All unselected
+ // successors of an already selected node should have a negative id.
+ // This assertion will catch such cases. If this assertion triggers
+ // it is likely you are using DAG-level Value/Node replacement functions
+ // (versus equivalent ISEL replacement) in backend-specific
+ // selections. See comment in EnforceNodeIdInvariant for more
+ // details.
+ assert(Op->getNodeId() != -1 &&
+ "Node has already selected predecessor node");
+ }
+ }
+ }
+#endif
+
// When we are using non-default rounding modes or FP exception behavior
// FP operations are represented by StrictFP pseudo-operations. They
// need to be simplified here so that the target-specific instruction
@@ -2134,52 +2213,44 @@ static SDNode *findGlueUse(SDNode *N) {
return nullptr;
}
-/// findNonImmUse - Return true if "Use" is a non-immediate use of "Def".
-/// This function iteratively traverses up the operand chain, ignoring
-/// certain nodes.
-static bool findNonImmUse(SDNode *Use, SDNode* Def, SDNode *ImmedUse,
- SDNode *Root, SmallPtrSetImpl<SDNode*> &Visited,
+/// findNonImmUse - Return true if "Def" is a predecessor of "Root" via a path
+/// beyond "ImmedUse". We may ignore chains as they are checked separately.
+static bool findNonImmUse(SDNode *Root, SDNode *Def, SDNode *ImmedUse,
bool IgnoreChains) {
- // The NodeID's are given uniques ID's where a node ID is guaranteed to be
- // greater than all of its (recursive) operands. If we scan to a point where
- // 'use' is smaller than the node we're scanning for, then we know we will
- // never find it.
- //
- // The Use may be -1 (unassigned) if it is a newly allocated node. This can
- // happen because we scan down to newly selected nodes in the case of glue
- // uses.
- std::vector<SDNode *> WorkList;
- WorkList.push_back(Use);
-
- while (!WorkList.empty()) {
- Use = WorkList.back();
- WorkList.pop_back();
- if (Use->getNodeId() < Def->getNodeId() && Use->getNodeId() != -1)
- continue;
+ SmallPtrSet<const SDNode *, 16> Visited;
+ SmallVector<const SDNode *, 16> WorkList;
+ // Only check if we have non-immediate uses of Def.
+ if (ImmedUse->isOnlyUserOf(Def))
+ return false;
- // Don't revisit nodes if we already scanned it and didn't fail, we know we
- // won't fail if we scan it again.
- if (!Visited.insert(Use).second)
+ // We don't care about paths to Def that go through ImmedUse, so mark it
+ // visited and queue its non-chain, non-Def operands.
+ Visited.insert(ImmedUse);
+ for (const SDValue &Op : ImmedUse->op_values()) {
+ SDNode *N = Op.getNode();
+ // Ignore chain deps (they are validated by
+ // HandleMergeInputChains) and immediate uses
+ if ((Op.getValueType() == MVT::Other && IgnoreChains) || N == Def)
continue;
+ if (!Visited.insert(N).second)
+ continue;
+ WorkList.push_back(N);
+ }
- for (const SDValue &Op : Use->op_values()) {
- // Ignore chain uses, they are validated by HandleMergeInputChains.
- if (Op.getValueType() == MVT::Other && IgnoreChains)
- continue;
-
+ // Initialize worklist to operands of Root.
+ if (Root != ImmedUse) {
+ for (const SDValue &Op : Root->op_values()) {
SDNode *N = Op.getNode();
- if (N == Def) {
- if (Use == ImmedUse || Use == Root)
- continue; // We are not looking for immediate use.
- assert(N != Root);
- return true;
- }
-
- // Traverse up the operand chain.
+ // Ignore chains (they are validated by HandleMergeInputChains)
+ if ((Op.getValueType() == MVT::Other && IgnoreChains) || N == Def)
+ continue;
+ if (!Visited.insert(N).second)
+ continue;
WorkList.push_back(N);
}
}
- return false;
+
+ return SDNode::hasPredecessorHelper(Def, Visited, WorkList, 0, true);
}
/// IsProfitableToFold - Returns true if it's profitable to fold the specific
@@ -2251,13 +2322,12 @@ bool SelectionDAGISel::IsLegalToFold(SDValue N, SDNode *U, SDNode *Root,
// If our query node has a glue result with a use, we've walked up it. If
// the user (which has already been selected) has a chain or indirectly uses
- // the chain, our WalkChainUsers predicate will not consider it. Because of
+ // the chain, HandleMergeInputChains will not consider it. Because of
// this, we cannot ignore chains in this predicate.
IgnoreChains = false;
}
- SmallPtrSet<SDNode*, 16> Visited;
- return !findNonImmUse(Root, N.getNode(), U, Root, Visited, IgnoreChains);
+ return !findNonImmUse(Root, N.getNode(), U, IgnoreChains);
}
void SelectionDAGISel::Select_INLINEASM(SDNode *N) {
@@ -2360,7 +2430,8 @@ void SelectionDAGISel::UpdateChains(
std::replace(ChainNodesMatched.begin(), ChainNodesMatched.end(), N,
static_cast<SDNode *>(nullptr));
});
- CurDAG->ReplaceAllUsesOfValueWith(ChainVal, InputChain);
+ if (ChainNode->getOpcode() != ISD::TokenFactor)
+ ReplaceUses(ChainVal, InputChain);
// If the node became dead and we haven't already seen it, delete it.
if (ChainNode != NodeToMatch && ChainNode->use_empty() &&
@@ -2375,143 +2446,6 @@ void SelectionDAGISel::UpdateChains(
DEBUG(dbgs() << "ISEL: Match complete!\n");
}
-enum ChainResult {
- CR_Simple,
- CR_InducesCycle,
- CR_LeadsToInteriorNode
-};
-
-/// WalkChainUsers - Walk down the users of the specified chained node that is
-/// part of the pattern we're matching, looking at all of the users we find.
-/// This determines whether something is an interior node, whether we have a
-/// non-pattern node in between two pattern nodes (which prevent folding because
-/// it would induce a cycle) and whether we have a TokenFactor node sandwiched
-/// between pattern nodes (in which case the TF becomes part of the pattern).
-///
-/// The walk we do here is guaranteed to be small because we quickly get down to
-/// already selected nodes "below" us.
-static ChainResult
-WalkChainUsers(const SDNode *ChainedNode,
- SmallVectorImpl<SDNode *> &ChainedNodesInPattern,
- DenseMap<const SDNode *, ChainResult> &TokenFactorResult,
- SmallVectorImpl<SDNode *> &InteriorChainedNodes) {
- ChainResult Result = CR_Simple;
-
- for (SDNode::use_iterator UI = ChainedNode->use_begin(),
- E = ChainedNode->use_end(); UI != E; ++UI) {
- // Make sure the use is of the chain, not some other value we produce.
- if (UI.getUse().getValueType() != MVT::Other) continue;
-
- SDNode *User = *UI;
-
- if (User->getOpcode() == ISD::HANDLENODE) // Root of the graph.
- continue;
-
- // If we see an already-selected machine node, then we've gone beyond the
- // pattern that we're selecting down into the already selected chunk of the
- // DAG.
- unsigned UserOpcode = User->getOpcode();
- if (User->isMachineOpcode() ||
- UserOpcode == ISD::CopyToReg ||
- UserOpcode == ISD::CopyFromReg ||
- UserOpcode == ISD::INLINEASM ||
- UserOpcode == ISD::EH_LABEL ||
- UserOpcode == ISD::LIFETIME_START ||
- UserOpcode == ISD::LIFETIME_END) {
- // If their node ID got reset to -1 then they've already been selected.
- // Treat them like a MachineOpcode.
- if (User->getNodeId() == -1)
- continue;
- }
-
- // If we have a TokenFactor, we handle it specially.
- if (User->getOpcode() != ISD::TokenFactor) {
- // If the node isn't a token factor and isn't part of our pattern, then it
- // must be a random chained node in between two nodes we're selecting.
- // This happens when we have something like:
- // x = load ptr
- // call
- // y = x+4
- // store y -> ptr
- // Because we structurally match the load/store as a read/modify/write,
- // but the call is chained between them. We cannot fold in this case
- // because it would induce a cycle in the graph.
- if (!std::count(ChainedNodesInPattern.begin(),
- ChainedNodesInPattern.end(), User))
- return CR_InducesCycle;
-
- // Otherwise we found a node that is part of our pattern. For example in:
- // x = load ptr
- // y = x+4
- // store y -> ptr
- // This would happen when we're scanning down from the load and see the
- // store as a user. Record that there is a use of ChainedNode that is
- // part of the pattern and keep scanning uses.
- Result = CR_LeadsToInteriorNode;
- InteriorChainedNodes.push_back(User);
- continue;
- }
-
- // If we found a TokenFactor, there are two cases to consider: first if the
- // TokenFactor is just hanging "below" the pattern we're matching (i.e. no
- // uses of the TF are in our pattern) we just want to ignore it. Second,
- // the TokenFactor can be sandwiched in between two chained nodes, like so:
- // [Load chain]
- // ^
- // |
- // [Load]
- // ^ ^
- // | \ DAG's like cheese
- // / \ do you?
- // / |
- // [TokenFactor] [Op]
- // ^ ^
- // | |
- // \ /
- // \ /
- // [Store]
- //
- // In this case, the TokenFactor becomes part of our match and we rewrite it
- // as a new TokenFactor.
- //
- // To distinguish these two cases, do a recursive walk down the uses.
- auto MemoizeResult = TokenFactorResult.find(User);
- bool Visited = MemoizeResult != TokenFactorResult.end();
- // Recursively walk chain users only if the result is not memoized.
- if (!Visited) {
- auto Res = WalkChainUsers(User, ChainedNodesInPattern, TokenFactorResult,
- InteriorChainedNodes);
- MemoizeResult = TokenFactorResult.insert(std::make_pair(User, Res)).first;
- }
- switch (MemoizeResult->second) {
- case CR_Simple:
- // If the uses of the TokenFactor are just already-selected nodes, ignore
- // it, it is "below" our pattern.
- continue;
- case CR_InducesCycle:
- // If the uses of the TokenFactor lead to nodes that are not part of our
- // pattern that are not selected, folding would turn this into a cycle,
- // bail out now.
- return CR_InducesCycle;
- case CR_LeadsToInteriorNode:
- break; // Otherwise, keep processing.
- }
-
- // Okay, we know we're in the interesting interior case. The TokenFactor
- // is now going to be considered part of the pattern so that we rewrite its
- // uses (it may have uses that are not part of the pattern) with the
- // ultimate chain result of the generated code. We will also add its chain
- // inputs as inputs to the ultimate TokenFactor we create.
- Result = CR_LeadsToInteriorNode;
- if (!Visited) {
- ChainedNodesInPattern.push_back(User);
- InteriorChainedNodes.push_back(User);
- }
- }
-
- return Result;
-}
-
/// HandleMergeInputChains - This implements the OPC_EmitMergeInputChains
/// operation for when the pattern matched at least one node with a chains. The
/// input vector contains a list of all of the chained nodes that we match. We
@@ -2521,47 +2455,56 @@ WalkChainUsers(const SDNode *ChainedNode,
static SDValue
HandleMergeInputChains(SmallVectorImpl<SDNode*> &ChainNodesMatched,
SelectionDAG *CurDAG) {
- // Used for memoization. Without it WalkChainUsers could take exponential
- // time to run.
- DenseMap<const SDNode *, ChainResult> TokenFactorResult;
- // Walk all of the chained nodes we've matched, recursively scanning down the
- // users of the chain result. This adds any TokenFactor nodes that are caught
- // in between chained nodes to the chained and interior nodes list.
- SmallVector<SDNode*, 3> InteriorChainedNodes;
- for (unsigned i = 0, e = ChainNodesMatched.size(); i != e; ++i) {
- if (WalkChainUsers(ChainNodesMatched[i], ChainNodesMatched,
- TokenFactorResult,
- InteriorChainedNodes) == CR_InducesCycle)
- return SDValue(); // Would induce a cycle.
- }
- // Okay, we have walked all the matched nodes and collected TokenFactor nodes
- // that we are interested in. Form our input TokenFactor node.
+ SmallPtrSet<const SDNode *, 16> Visited;
+ SmallVector<const SDNode *, 8> Worklist;
SmallVector<SDValue, 3> InputChains;
- for (unsigned i = 0, e = ChainNodesMatched.size(); i != e; ++i) {
- // Add the input chain of this node to the InputChains list (which will be
- // the operands of the generated TokenFactor) if it's not an interior node.
- SDNode *N = ChainNodesMatched[i];
- if (N->getOpcode() != ISD::TokenFactor) {
- if (std::count(InteriorChainedNodes.begin(),InteriorChainedNodes.end(),N))
- continue;
+ unsigned int Max = 8192;
- // Otherwise, add the input chain.
- SDValue InChain = ChainNodesMatched[i]->getOperand(0);
- assert(InChain.getValueType() == MVT::Other && "Not a chain");
- InputChains.push_back(InChain);
- continue;
- }
+ // Quick exit on trivial merge.
+ if (ChainNodesMatched.size() == 1)
+ return ChainNodesMatched[0]->getOperand(0);
- // If we have a token factor, we want to add all inputs of the token factor
- // that are not part of the pattern we're matching.
- for (const SDValue &Op : N->op_values()) {
- if (!std::count(ChainNodesMatched.begin(), ChainNodesMatched.end(),
- Op.getNode()))
- InputChains.push_back(Op);
- }
+ // Add chains that aren't already added (internal). Peek through
+ // token factors.
+ std::function<void(const SDValue)> AddChains = [&](const SDValue V) {
+ if (V.getValueType() != MVT::Other)
+ return;
+ if (V->getOpcode() == ISD::EntryToken)
+ return;
+ if (!Visited.insert(V.getNode()).second)
+ return;
+ if (V->getOpcode() == ISD::TokenFactor) {
+ for (const SDValue &Op : V->op_values())
+ AddChains(Op);
+ } else
+ InputChains.push_back(V);
+ };
+
+ for (auto *N : ChainNodesMatched) {
+ Worklist.push_back(N);
+ Visited.insert(N);
}
+ while (!Worklist.empty())
+ AddChains(Worklist.pop_back_val()->getOperand(0));
+
+ // Skip the search if there are no chain dependencies.
+ if (InputChains.size() == 0)
+ return CurDAG->getEntryNode();
+
+ // If one of these chains is a successor of input, we must have a
+ // node that is both the predecessor and successor of the
+ // to-be-merged nodes. Fail.
+ Visited.clear();
+ for (SDValue V : InputChains)
+ Worklist.push_back(V.getNode());
+
+ for (auto *N : ChainNodesMatched)
+ if (SDNode::hasPredecessorHelper(N, Visited, Worklist, Max, true))
+ return SDValue();
+
+ // Return merged chain.
if (InputChains.size() == 1)
return InputChains[0];
return CurDAG->getNode(ISD::TokenFactor, SDLoc(ChainNodesMatched[0]),
@@ -2606,8 +2549,8 @@ MorphNode(SDNode *Node, unsigned TargetOpc, SDVTList VTList,
// Move the glue if needed.
if ((EmitNodeInfo & OPFL_GlueOutput) && OldGlueResultNo != -1 &&
(unsigned)OldGlueResultNo != ResNumResults-1)
- CurDAG->ReplaceAllUsesOfValueWith(SDValue(Node, OldGlueResultNo),
- SDValue(Res, ResNumResults-1));
+ ReplaceUses(SDValue(Node, OldGlueResultNo),
+ SDValue(Res, ResNumResults - 1));
if ((EmitNodeInfo & OPFL_GlueOutput) != 0)
--ResNumResults;
@@ -2615,14 +2558,15 @@ MorphNode(SDNode *Node, unsigned TargetOpc, SDVTList VTList,
// Move the chain reference if needed.
if ((EmitNodeInfo & OPFL_Chain) && OldChainResultNo != -1 &&
(unsigned)OldChainResultNo != ResNumResults-1)
- CurDAG->ReplaceAllUsesOfValueWith(SDValue(Node, OldChainResultNo),
- SDValue(Res, ResNumResults-1));
+ ReplaceUses(SDValue(Node, OldChainResultNo),
+ SDValue(Res, ResNumResults - 1));
// Otherwise, no replacement happened because the node already exists. Replace
// Uses of the old node with the new one.
if (Res != Node) {
- CurDAG->ReplaceAllUsesWith(Node, Res);
- CurDAG->RemoveDeadNode(Node);
+ ReplaceNode(Node, Res);
+ } else {
+ EnforceNodeIdInvariant(Res);
}
return Res;
@@ -2939,8 +2883,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
return;
case ISD::AssertSext:
case ISD::AssertZext:
- CurDAG->ReplaceAllUsesOfValueWith(SDValue(NodeToMatch, 0),
- NodeToMatch->getOperand(0));
+ ReplaceUses(SDValue(NodeToMatch, 0), NodeToMatch->getOperand(0));
CurDAG->RemoveDeadNode(NodeToMatch);
return;
case ISD::INLINEASM:
@@ -3702,7 +3645,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
NodeToMatch->getValueType(i).getSizeInBits() ==
Res.getValueSizeInBits()) &&
"invalid replacement");
- CurDAG->ReplaceAllUsesOfValueWith(SDValue(NodeToMatch, i), Res);
+ ReplaceUses(SDValue(NodeToMatch, i), Res);
}
// Update chain uses.
@@ -3715,8 +3658,8 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
if (NodeToMatch->getValueType(NodeToMatch->getNumValues() - 1) ==
MVT::Glue &&
InputGlue.getNode())
- CurDAG->ReplaceAllUsesOfValueWith(
- SDValue(NodeToMatch, NodeToMatch->getNumValues() - 1), InputGlue);
+ ReplaceUses(SDValue(NodeToMatch, NodeToMatch->getNumValues() - 1),
+ InputGlue);
assert(NodeToMatch->use_empty() &&
"Didn't replace all uses of the node?");
diff --git a/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp b/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
index f4776adb069..be5345e422d 100644
--- a/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
+++ b/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp
@@ -759,12 +759,11 @@ void AMDGPUDAGToDAGISel::SelectADD_SUB_I64(SDNode *N) {
if (ProduceCarry) {
// Replace the carry-use
- CurDAG->ReplaceAllUsesOfValueWith(SDValue(N, 1), SDValue(AddHi, 1));
+ ReplaceUses(SDValue(N, 1), SDValue(AddHi, 1));
}
// Replace the remaining uses.
- CurDAG->ReplaceAllUsesWith(N, RegSequence);
- CurDAG->RemoveDeadNode(N);
+ ReplaceNode(N, RegSequence);
}
void AMDGPUDAGToDAGISel::SelectUADDO_USUBO(SDNode *N) {
diff --git a/lib/Target/ARM/ARMISelDAGToDAG.cpp b/lib/Target/ARM/ARMISelDAGToDAG.cpp
index 8d32510e200..0f504718f28 100644
--- a/lib/Target/ARM/ARMISelDAGToDAG.cpp
+++ b/lib/Target/ARM/ARMISelDAGToDAG.cpp
@@ -498,7 +498,7 @@ bool ARMDAGToDAGISel::canExtractShiftFromMul(const SDValue &N,
void ARMDAGToDAGISel::replaceDAGValue(const SDValue &N, SDValue M) {
CurDAG->RepositionNode(N.getNode()->getIterator(), M.getNode());
- CurDAG->ReplaceAllUsesWith(N, M);
+ ReplaceUses(N, M);
}
bool ARMDAGToDAGISel::SelectImmShifterOperand(SDValue N,
diff --git a/lib/Target/Hexagon/HexagonISelDAGToDAG.cpp b/lib/Target/Hexagon/HexagonISelDAGToDAG.cpp
index a6ac4e3df74..3721856ff45 100644
--- a/lib/Target/Hexagon/HexagonISelDAGToDAG.cpp
+++ b/lib/Target/Hexagon/HexagonISelDAGToDAG.cpp
@@ -777,7 +777,7 @@ void HexagonDAGToDAGISel::SelectBitcast(SDNode *N) {
return;
}
- CurDAG->ReplaceAllUsesOfValueWith(SDValue(N,0), N->getOperand(0));
+ ReplaceUses(SDValue(N, 0), N->getOperand(0));
CurDAG->RemoveDeadNode(N);
}
@@ -2182,4 +2182,3 @@ void HexagonDAGToDAGISel::rebalanceAddressTrees() {
RootHeights.clear();
RootWeights.clear();
}
-
diff --git a/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp b/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp
index f08c5054065..0608f06ef7e 100644
--- a/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp
+++ b/lib/Target/Hexagon/HexagonISelDAGToDAGHVX.cpp
@@ -1914,7 +1914,6 @@ void HvxSelector::selectShuffle(SDNode *N) {
// If the mask is all -1's, generate "undef".
if (!UseLeft && !UseRight) {
ISel.ReplaceNode(N, ISel.selectUndef(SDLoc(SN), ResTy).getNode());
- DAG.RemoveDeadNode(N);
return;
}
@@ -1970,7 +1969,6 @@ void HvxSelector::selectRor(SDNode *N) {
NewN = DAG.getMachineNode(Hexagon::V6_vror, dl, Ty, {VecV, RotV});
ISel.ReplaceNode(N, NewN);
- DAG.RemoveDeadNode(N);
}
void HexagonDAGToDAGISel::SelectHvxShuffle(SDNode *N) {
@@ -2017,8 +2015,7 @@ void HexagonDAGToDAGISel::SelectV65GatherPred(SDNode *N) {
MemOp[0] = cast<MemIntrinsicSDNode>(N)->getMemOperand();
cast<MachineSDNode>(Result)->setMemRefs(MemOp, MemOp + 1);
- ReplaceUses(N, Result);
- CurDAG->RemoveDeadNode(N);
+ ReplaceNode(N, Result);
}
void HexagonDAGToDAGISel::SelectV65Gather(SDNode *N) {
@@ -2056,8 +2053,7 @@ void HexagonDAGToDAGISel::SelectV65Gather(SDNode *N) {
MemOp[0] = cast<MemIntrinsicSDNode>(N)->getMemOperand();
cast<MachineSDNode>(Result)->setMemRefs(MemOp, MemOp + 1);
- ReplaceUses(N, Result);
- CurDAG->RemoveDeadNode(N);
+ ReplaceNode(N, Result);
}
void HexagonDAGToDAGISel::SelectHVXDualOutput(SDNode *N) {
@@ -2100,5 +2096,3 @@ void HexagonDAGToDAGISel::SelectHVXDualOutput(SDNode *N) {
ReplaceUses(SDValue(N, 1), SDValue(Result, 1));
CurDAG->RemoveDeadNode(N);
}
-
-
diff --git a/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp b/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp
index ce6f3d37f5c..fe59d820c88 100644
--- a/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp
+++ b/lib/Target/SystemZ/SystemZISelDAGToDAG.cpp
@@ -589,10 +589,16 @@ bool SystemZDAGToDAGISel::selectAddress(SDValue Addr,
// The selection DAG must no longer depend on their uniqueness when this
// function is used.
static void insertDAGNode(SelectionDAG *DAG, SDNode *Pos, SDValue N) {
- if (N.getNode()->getNodeId() == -1 ||
- N.getNode()->getNodeId() > Pos->getNodeId()) {
+ if (N->getNodeId() == -1 ||
+ (SelectionDAGISel::getUninvalidatedNodeId(N.getNode()) >
+ SelectionDAGISel::getUninvalidatedNodeId(Pos))) {
DAG->RepositionNode(Pos->getIterator(), N.getNode());
- N.getNode()->setNodeId(Pos->getNodeId());
+ // Mark Node as invalid for pruning, as after this it may be a successor of a
+ // selected node but otherwise be in the same position as Pos.
+ // Conservatively mark it with the same -abs(Id) to ensure the node id
+ // invariant is preserved.
+ N->setNodeId(Pos->getNodeId());
+ SelectionDAGISel::InvalidateNodeId(N.getNode());
}
}
@@ -1022,8 +1028,7 @@ bool SystemZDAGToDAGISel::tryRISBGZero(SDNode *N) {
};
SDValue New = convertTo(
DL, VT, SDValue(CurDAG->getMachineNode(Opcode, DL, OpcodeVT, Ops), 0));
- ReplaceUses(N, New.getNode());
- CurDAG->RemoveDeadNode(N);
+ ReplaceNode(N, New.getNode());
return true;
}
@@ -1114,8 +1119,7 @@ void SystemZDAGToDAGISel::splitLargeImmediate(unsigned Opcode, SDNode *Node,
SDValue Lower = CurDAG->getConstant(LowerVal, DL, VT);
SDValue Or = CurDAG->getNode(Opcode, DL, VT, Upper, Lower);
- ReplaceUses(Node, Or.getNode());
- CurDAG->RemoveDeadNode(Node);
+ ReplaceNode(Node, Or.getNode());
SelectCode(Or.getNode());
}
diff --git a/lib/Target/X86/X86ISelDAGToDAG.cpp b/lib/Target/X86/X86ISelDAGToDAG.cpp
index d79fd0ca4da..ee2d221e31c 100644
--- a/lib/Target/X86/X86ISelDAGToDAG.cpp
+++ b/lib/Target/X86/X86ISelDAGToDAG.cpp
@@ -988,10 +988,16 @@ bool X86DAGToDAGISel::matchAdd(SDValue N, X86ISelAddressMode &AM,
// IDs! The selection DAG must no longer depend on their uniqueness when this
// is used.
static void insertDAGNode(SelectionDAG &DAG, SDValue Pos, SDValue N) {
- if (N.getNode()->getNodeId() == -1 ||
- N.getNode()->getNodeId() > Pos.getNode()->getNodeId()) {
- DAG.RepositionNode(Pos.getNode()->getIterator(), N.getNode());
- N.getNode()->setNodeId(Pos.getNode()->getNodeId());
+ if (N->getNodeId() == -1 ||
+ (SelectionDAGISel::getUninvalidatedNodeId(N.getNode()) >
+ SelectionDAGISel::getUninvalidatedNodeId(Pos.getNode()))) {
+ DAG.RepositionNode(Pos->getIterator(), N.getNode());
+ // Mark Node as invalid for pruning, as after this it may be a successor of a
+ // selected node but otherwise be in the same position as Pos.
+ // Conservatively mark it with the same -abs(Id) to ensure the node id
+ // invariant is preserved.
+ N->setNodeId(Pos->getNodeId());
+ SelectionDAGISel::InvalidateNodeId(N.getNode());
}
}
@@ -2092,50 +2098,84 @@ static bool isFusableLoadOpStorePattern(StoreSDNode *StoreNode,
LoadNode->getOffset() != StoreNode->getOffset())
return false;
- // Check if the chain is produced by the load or is a TokenFactor with
- // the load output chain as an operand. Return InputChain by reference.
+ bool FoundLoad = false;
+ SmallVector<SDValue, 4> ChainOps;
+ SmallVector<const SDNode *, 4> LoopWorklist;
+ SmallPtrSet<const SDNode *, 16> Visited;
+ const unsigned int Max = 1024;
+
+ // Visualization of Load-Op-Store fusion:
+ // -------------------------
+ // Legend:
+ // *-lines = Chain operand dependencies.
+ // |-lines = Normal operand dependencies.
+ // Dependencies flow down and right. n-suffix references multiple nodes.
+ //
+ // C Xn C
+ // * * *
+ // * * *
+ // Xn A-LD Yn TF Yn
+ // * * \ | * |
+ // * * \ | * |
+ // * * \ | => A--LD_OP_ST
+ // * * \| \
+ // TF OP \
+ // * | \ Zn
+ // * | \
+ // A-ST Zn
+ //
+
+ // This merge induces dependences from: #1: Xn -> LD, OP, Zn
+ // #2: Yn -> LD
+ // #3: ST -> Zn
+
+ // Ensure the transform is safe by checking for the dual
+ // dependencies to make sure we do not induce a loop.
+
+ // As LD is a predecessor to both OP and ST we can do this by checking:
+ // a). if LD is a predecessor to a member of Xn or Yn.
+ // b). if a Zn is a predecessor to ST.
+
+ // However, (b) can only occur through being a chain predecessor to
+ // ST, which is the same as Zn being a member or predecessor of Xn,
+ // which is a subset of LD being a predecessor of Xn. So it's
+ // subsumed by check (a).
+
SDValue Chain = StoreNode->getChain();
- bool ChainCheck = false;
+ // Gather X elements in ChainOps.
if (Chain == Load.getValue(1)) {
- ChainCheck = true;
- InputChain = LoadNode->getChain();
+ FoundLoad = true;
+ ChainOps.push_back(Load.getOperand(0));
} else if (Chain.getOpcode() == ISD::TokenFactor) {
- SmallVector<SDValue, 4> ChainOps;
for (unsigned i = 0, e = Chain.getNumOperands(); i != e; ++i) {
SDValue Op = Chain.getOperand(i);
if (Op == Load.getValue(1)) {
- ChainCheck = true;
+ FoundLoad = true;
// Drop Load, but keep its chain. No cycle check necessary.
ChainOps.push_back(Load.getOperand(0));
continue;
}
-
- // Make sure using Op as part of the chain would not cause a cycle here.
- // In theory, we could check whether the chain node is a predecessor of
- // the load. But that can be very expensive. Instead visit the uses and
- // make sure they all have smaller node id than the load.
- int LoadId = LoadNode->getNodeId();
- for (SDNode::use_iterator UI = Op.getNode()->use_begin(),
- UE = UI->use_end(); UI != UE; ++UI) {
- if (UI.getUse().getResNo() != 0)
- continue;
- if (UI->getNodeId() > LoadId)
- return false;
- }
-
+ LoopWorklist.push_back(Op.getNode());
ChainOps.push_back(Op);
}
-
- if (ChainCheck)
- // Make a new TokenFactor with all the other input chains except
- // for the load.
- InputChain = CurDAG->getNode(ISD::TokenFactor, SDLoc(Chain),
- MVT::Other, ChainOps);
}
- if (!ChainCheck)
+
+ if (!FoundLoad)
+ return false;
+
+ // Worklist is currently Xn. Add Yn to worklist.
+ for (SDValue Op : StoredVal->ops())
+ if (Op.getNode() != LoadNode)
+ LoopWorklist.push_back(Op.getNode());
+
+ // Check (a) if Load is a predecessor to Xn + Yn
+ if (SDNode::hasPredecessorHelper(Load.getNode(), Visited, LoopWorklist, Max,
+ true))
return false;
+ InputChain =
+ CurDAG->getNode(ISD::TokenFactor, SDLoc(Chain), MVT::Other, ChainOps);
return true;
}
@@ -2335,6 +2375,8 @@ bool X86DAGToDAGISel::foldLoadStoreIntoMemOperand(SDNode *Node) {
MemOp[1] = LoadNode->getMemOperand();
Result->setMemRefs(MemOp, MemOp + 2);
+ // Update Load Chain uses as well.
+ ReplaceUses(SDValue(LoadNode, 1), SDValue(Result, 1));
ReplaceUses(SDValue(StoreNode, 0), SDValue(Result, 1));
ReplaceUses(SDValue(StoredVal.getNode(), 1), SDValue(Result, 0));
CurDAG->RemoveDeadNode(Node);
@@ -2946,12 +2988,7 @@ void X86DAGToDAGISel::Select(SDNode *Node) {
return;
}
- case X86ISD::CMP:
- case X86ISD::SUB: {
- // Sometimes a SUB is used to perform comparison.
- if (Opcode == X86ISD::SUB && Node->hasAnyUseOfValue(0))
- // This node is not a CMP.
- break;
+ case X86ISD::CMP: {
SDValue N0 = Node->getOperand(0);
SDValue N1 = Node->getOperand(1);
@@ -2971,95 +3008,52 @@ void X86DAGToDAGISel::Select(SDNode *Node) {
if (!C) break;
uint64_t Mask = C->getZExtValue();
- // For example, convert "testl %eax, $8" to "testb %al, $8"
+ MVT VT;
+ int SubRegOp;
+ unsigned Op;
+
if (isUInt<8>(Mask) &&
(!(Mask & 0x80) || hasNoSignedComparisonUses(Node))) {
- SDValue Imm = CurDAG->getTargetConstant(Mask, dl, MVT::i8);
- SDValue Reg = N0.getOperand(0);
-
- // Extract the l-register.
- SDValue Subreg = CurDAG->getTargetExtractSubreg(X86::sub_8bit, dl,
- MVT::i8, Reg);
-
- // Emit a testb.
- SDNode *NewNode = CurDAG->getMachineNode(X86::TEST8ri, dl, MVT::i32,
- Subreg, Imm);
- // Replace SUB|CMP with TEST, since SUB has two outputs while TEST has
- // one, do not call ReplaceAllUsesWith.
- ReplaceUses(SDValue(Node, (Opcode == X86ISD::SUB ? 1 : 0)),
- SDValue(NewNode, 0));
- CurDAG->RemoveDeadNode(Node);
- return;
+ // For example, convert "testl %eax, $8" to "testb %al, $8"
+ VT = MVT::i8;
+ SubRegOp = X86::sub_8bit;
+ Op = X86::TEST8ri;
+ } else if (OptForMinSize && isUInt<16>(Mask) &&
+ (!(Mask & 0x8000) || hasNoSignedComparisonUses(Node))) {
+ // For example, "testl %eax, $32776" to "testw %ax, $32776".
+ // NOTE: We only want to form TESTW instructions if optimizing for
+ // min size. Otherwise we only save one byte and possibly get a length
+ // changing prefix penalty in the decoders.
+ VT = MVT::i16;
+ SubRegOp = X86::sub_16bit;
+ Op = X86::TEST16ri;
+ } else if (isUInt<32>(Mask) && N0.getValueType() != MVT::i16 &&
+ (!(Mask & 0x80000000) || hasNoSignedComparisonUses(Node))) {
+ // For example, "testq %rax, $268468232" to "testl %eax, $268468232".
+ // NOTE: We only want to run that transform if N0 is 32 or 64 bits.
+ // Otherwise, we find ourselves in a position where we have to do
+ // promotion. If previous passes did not promote the and, we assume
+ // they had a good reason not to and do not promote here.
+ VT = MVT::i32;
+ SubRegOp = X86::sub_32bit;
+ Op = X86::TEST32ri;
+ } else {
+ // No eligible transformation was found.
+ break;
}
- // For example, "testl %eax, $2048" to "testb %ah, $8".
- if (isShiftedUInt<8, 8>(Mask) &&
- (!(Mask & 0x8000) || hasNoSignedComparisonUses(Node))) {
- // Shift the immediate right by 8 bits.
- SDValue ShiftedImm = CurDAG->getTargetConstant(Mask >> 8, dl, MVT::i8);
- SDValue Reg = N0.getOperand(0);
-
- // Extract the h-register.
- SDValue Subreg = CurDAG->getTargetExtractSubreg(X86::sub_8bit_hi, dl,
- MVT::i8, Reg);
-
- // Emit a testb. The EXTRACT_SUBREG becomes a COPY that can only
- // target GR8_NOREX registers, so make sure the register class is
- // forced.
- SDNode *NewNode = CurDAG->getMachineNode(X86::TEST8ri_NOREX, dl,
- MVT::i32, Subreg, ShiftedImm);
- // Replace SUB|CMP with TEST, since SUB has two outputs while TEST has
- // one, do not call ReplaceAllUsesWith.
- ReplaceUses(SDValue(Node, (Opcode == X86ISD::SUB ? 1 : 0)),
- SDValue(NewNode, 0));
- CurDAG->RemoveDeadNode(Node);
- return;
- }
+ SDValue Imm = CurDAG->getTargetConstant(Mask, dl, VT);
+ SDValue Reg = N0.getOperand(0);
- // For example, "testl %eax, $32776" to "testw %ax, $32776".
- // NOTE: We only want to form TESTW instructions if optimizing for
- // min size. Otherwise we only save one byte and possibly get a length
- // changing prefix penalty in the decoders.
- if (OptForMinSize && isUInt<16>(Mask) && N0.getValueType() != MVT::i16 &&
- (!(Mask & 0x8000) || hasNoSignedComparisonUses(Node))) {
- SDValue Imm = CurDAG->getTargetConstant(Mask, dl, MVT::i16);
- SDValue Reg = N0.getOperand(0);
-
- // Extract the 16-bit subregister.
- SDValue Subreg = CurDAG->getTargetExtractSubreg(X86::sub_16bit, dl,
- MVT::i16, Reg);
-
- // Emit a testw.
- SDNode *NewNode = CurDAG->getMachineNode(X86::TEST16ri, dl, MVT::i32,
- Subreg, Imm);
- // Replace SUB|CMP with TEST, since SUB has two outputs while TEST has
- // one, do not call ReplaceAllUsesWith.
- ReplaceUses(SDValue(Node, (Opcode == X86ISD::SUB ? 1 : 0)),
- SDValue(NewNode, 0));
- CurDAG->RemoveDeadNode(Node);
- return;
- }
+ // Extract the subregister if necessary.
+ if (N0.getValueType() != VT)
+ Reg = CurDAG->getTargetExtractSubreg(SubRegOp, dl, VT, Reg);
- // For example, "testq %rax, $268468232" to "testl %eax, $268468232".
- if (isUInt<32>(Mask) && N0.getValueType() == MVT::i64 &&
- (!(Mask & 0x80000000) || hasNoSignedComparisonUses(Node))) {
- SDValue Imm = CurDAG->getTargetConstant(Mask, dl, MVT::i32);
- SDValue Reg = N0.getOperand(0);
-
- // Extract the 32-bit subregister.
- SDValue Subreg = CurDAG->getTargetExtractSubreg(X86::sub_32bit, dl,
- MVT::i32, Reg);
-
- // Emit a testl.
- SDNode *NewNode = CurDAG->getMachineNode(X86::TEST32ri, dl, MVT::i32,
- Subreg, Imm);
- // Replace SUB|CMP with TEST, since SUB has two outputs while TEST has
- // one, do not call ReplaceAllUsesWith.
- ReplaceUses(SDValue(Node, (Opcode == X86ISD::SUB ? 1 : 0)),
- SDValue(NewNode, 0));
- CurDAG->RemoveDeadNode(Node);
- return;
- }
+ // Emit a testb, testw, or testl.
+ SDNode *NewNode = CurDAG->getMachineNode(Op, dl, MVT::i32, Reg, Imm);
+ // Replace CMP with TEST.
+ ReplaceNode(Node, NewNode);
+ return;
}
break;
}
diff --git a/lib/Target/X86/X86ISelLowering.cpp b/lib/Target/X86/X86ISelLowering.cpp
index c1ddb771e2f..86e71cba87b 100644
--- a/lib/Target/X86/X86ISelLowering.cpp
+++ b/lib/Target/X86/X86ISelLowering.cpp
@@ -8131,6 +8131,32 @@ X86TargetLowering::LowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG) const {
return LD;
}
+ // If this is a splat of pairs of 32-bit elements, we can use a narrower
+ // build_vector and broadcast it.
+ // TODO: We could probably generalize this more.
+ if (Subtarget.hasAVX2() && EVTBits == 32 && Values.size() == 2) {
+ SDValue Ops[4] = { Op.getOperand(0), Op.getOperand(1),
+ DAG.getUNDEF(ExtVT), DAG.getUNDEF(ExtVT) };
+ auto CanSplat = [](SDValue Op, unsigned NumElems, ArrayRef<SDValue> Ops) {
+ // Make sure all the even/odd operands match.
+ for (unsigned i = 2; i != NumElems; ++i)
+ if (Ops[i % 2] != Op.getOperand(i))
+ return false;
+ return true;
+ };
+ if (CanSplat(Op, NumElems, Ops)) {
+ MVT WideEltVT = VT.isFloatingPoint() ? MVT::f64 : MVT::i64;
+ MVT NarrowVT = MVT::getVectorVT(ExtVT, 4);
+ // Create a new build vector and cast to v2i64/v2f64.
+ SDValue NewBV = DAG.getBitcast(MVT::getVectorVT(WideEltVT, 2),
+ DAG.getBuildVector(NarrowVT, dl, Ops));
+ // Broadcast from v2i64/v2f64 and cast to final VT.
+ MVT BcastVT = MVT::getVectorVT(WideEltVT, NumElems/2);
+ return DAG.getBitcast(VT, DAG.getNode(X86ISD::VBROADCAST, dl, BcastVT,
+ NewBV));
+ }
+ }
+
// For AVX-length vectors, build the individual 128-bit pieces and use
// shuffles to put them in place.
if (VT.is256BitVector() || VT.is512BitVector()) {
diff --git a/lib/Target/X86/X86InstrArithmetic.td b/lib/Target/X86/X86InstrArithmetic.td
index 98cc8fb7439..3d5de637da2 100644
--- a/lib/Target/X86/X86InstrArithmetic.td
+++ b/lib/Target/X86/X86InstrArithmetic.td
@@ -1257,14 +1257,6 @@ let isCompare = 1 in {
def TEST32mi : BinOpMI_F<0xF6, "test", Xi32, X86testpat, MRM0m>;
let Predicates = [In64BitMode] in
def TEST64mi32 : BinOpMI_F<0xF6, "test", Xi64, X86testpat, MRM0m>;
-
- // When testing the result of EXTRACT_SUBREG sub_8bit_hi, make sure the
- // register class is constrained to GR8_NOREX. This pseudo is explicitly
- // marked side-effect free, since it doesn't have an isel pattern like
- // other test instructions.
- let isPseudo = 1, hasSideEffects = 0 in
- def TEST8ri_NOREX : I<0, Pseudo, (outs), (ins GR8_NOREX:$src, i8imm:$mask),
- "", [], IIC_BIN_NONMEM>, Sched<[WriteALU]>;
} // Defs = [EFLAGS]
def TEST8i8 : BinOpAI_F<0xA8, "test", Xi8 , AL,
diff --git a/lib/Target/X86/X86InstrInfo.cpp b/lib/Target/X86/X86InstrInfo.cpp
index 11ada51a870..84a9200a0ef 100644
--- a/lib/Target/X86/X86InstrInfo.cpp
+++ b/lib/Target/X86/X86InstrInfo.cpp
@@ -7854,9 +7854,6 @@ bool X86InstrInfo::expandPostRAPseudo(MachineInstr &MI) const {
case X86::VMOVUPSZ256mr_NOVLX:
return expandNOVLXStore(MIB, &getRegisterInfo(), get(X86::VMOVUPSYmr),
get(X86::VEXTRACTF64x4Zmr), X86::sub_ymm);
- case X86::TEST8ri_NOREX:
- MI.setDesc(get(X86::TEST8ri));
- return true;
case X86::MOV32ri64:
MI.setDesc(get(X86::MOV32ri));
return true;
diff --git a/lib/Target/X86/X86MacroFusion.cpp b/lib/Target/X86/X86MacroFusion.cpp
index 67d95c2233d..4e11397dec4 100644
--- a/lib/Target/X86/X86MacroFusion.cpp
+++ b/lib/Target/X86/X86MacroFusion.cpp
@@ -86,7 +86,6 @@ static bool shouldScheduleAdjacent(const TargetInstrInfo &TII,
case X86::TEST16mr:
case X86::TEST32mr:
case X86::TEST64mr:
- case X86::TEST8ri_NOREX:
case X86::AND16i16:
case X86::AND16ri:
case X86::AND16ri8:
diff --git a/test/CodeGen/SystemZ/pr36164.ll b/test/CodeGen/SystemZ/pr36164.ll
new file mode 100644
index 00000000000..0c850091d31
--- /dev/null
+++ b/test/CodeGen/SystemZ/pr36164.ll
@@ -0,0 +1,113 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc %s -o - -mtriple=s390x-linux-gnu -mcpu=z13 -disable-basicaa | FileCheck %s
+
+; This test checks that we do not keep a reference to a deleted node.
+
+%0 = type { i32 }
+
+@g_11 = external dso_local unnamed_addr global i1, align 4
+@g_69 = external dso_local global i32, align 4
+@g_73 = external dso_local unnamed_addr global i32, align 4
+@g_832 = external dso_local constant %0, align 4
+@g_938 = external dso_local unnamed_addr global i64, align 8
+
+; Function Attrs: nounwind
+define void @main() local_unnamed_addr #0 {
+; CHECK-LABEL: main:
+; CHECK: # %bb.0:
+; CHECK-NEXT: stmg %r12, %r15, 96(%r15)
+; CHECK-NEXT: .cfi_offset %r12, -64
+; CHECK-NEXT: .cfi_offset %r13, -56
+; CHECK-NEXT: .cfi_offset %r14, -48
+; CHECK-NEXT: .cfi_offset %r15, -40
+; CHECK-NEXT: lhi %r0, 1
+; CHECK-NEXT: larl %r1, g_938
+; CHECK-NEXT: lhi %r2, 2
+; CHECK-NEXT: lhi %r3, 3
+; CHECK-NEXT: lhi %r4, 0
+; CHECK-NEXT: lhi %r5, 4
+; CHECK-NEXT: larl %r14, g_11
+; CHECK-NEXT: .LBB0_1: # =>This Inner Loop Header: Depth=1
+; CHECK-NEXT: strl %r0, g_73
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: strl %r0, g_69
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: lghi %r13, 24
+; CHECK-NEXT: strl %r2, g_69
+; CHECK-NEXT: ag %r13, 0(%r1)
+; CHECK-NEXT: lrl %r12, g_832
+; CHECK-NEXT: strl %r3, g_69
+; CHECK-NEXT: lrl %r12, g_832
+; CHECK-NEXT: strl %r4, g_69
+; CHECK-NEXT: lrl %r12, g_832
+; CHECK-NEXT: strl %r0, g_69
+; CHECK-NEXT: lrl %r12, g_832
+; CHECK-NEXT: strl %r2, g_69
+; CHECK-NEXT: lrl %r12, g_832
+; CHECK-NEXT: strl %r3, g_69
+; CHECK-NEXT: stgrl %r13, g_938
+; CHECK-NEXT: lrl %r13, g_832
+; CHECK-NEXT: strl %r5, g_69
+; CHECK-NEXT: mvi 0(%r14), 1
+; CHECK-NEXT: j .LBB0_1
+ br label %1
+
+; <label>:1: ; preds = %1, %0
+ store i32 1, i32* @g_73, align 4
+ %2 = load i64, i64* @g_938, align 8
+ store i32 0, i32* @g_69, align 4
+ %3 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ %4 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ %5 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ %6 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 1, i32* @g_69, align 4
+ %7 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ %8 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 3, i32* @g_69, align 4
+ %9 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ %10 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 1, i32* @g_69, align 4
+ %11 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 2, i32* @g_69, align 4
+ %12 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 3, i32* @g_69, align 4
+ %13 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 0, i32* @g_69, align 4
+ %14 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ %15 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ %16 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ %17 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 1, i32* @g_69, align 4
+ %18 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 2, i32* @g_69, align 4
+ %19 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 3, i32* @g_69, align 4
+ %20 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 0, i32* @g_69, align 4
+ %21 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 1, i32* @g_69, align 4
+ %22 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 2, i32* @g_69, align 4
+ %23 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 3, i32* @g_69, align 4
+ %24 = add i64 %2, 24
+ store i64 %24, i64* @g_938, align 8
+ %25 = load volatile i32, i32* getelementptr inbounds (%0, %0* @g_832, i64 0, i32 0), align 4
+ store i32 4, i32* @g_69, align 4
+ store i1 true, i1* @g_11, align 4
+ br label %1
+}
diff --git a/test/CodeGen/X86/2012-01-16-mfence-nosse-flags.ll b/test/CodeGen/X86/2012-01-16-mfence-nosse-flags.ll
deleted file mode 100644
index a6c34b8fffa..00000000000
--- a/test/CodeGen/X86/2012-01-16-mfence-nosse-flags.ll
+++ /dev/null
@@ -1,33 +0,0 @@
-; RUN: llc < %s -verify-machineinstrs -mtriple=i686-linux -mattr=-sse | FileCheck %s
-; PR11768
-
-@ptr = external global i8*
-
-define void @baz() nounwind ssp {
-entry:
- %0 = load i8*, i8** @ptr, align 4
- %cmp = icmp eq i8* %0, null
- fence seq_cst
- br i1 %cmp, label %if.then, label %if.else
-
-; Make sure the fence comes before the comparison, since it
-; clobbers EFLAGS.
-
-; CHECK: lock orl {{.*}}, (%esp)
-; CHECK-NEXT: testl [[REG:%e[a-z]+]], [[REG]]
-
-if.then: ; preds = %entry
- tail call void bitcast (void (...)* @foo to void ()*)() nounwind
- br label %if.end
-
-if.else: ; preds = %entry
- tail call void bitcast (void (...)* @bar to void ()*)() nounwind
- br label %if.end
-
-if.end: ; preds = %if.else, %if.then
- ret void
-}
-
-declare void @foo(...)
-
-declare void @bar(...)
diff --git a/test/CodeGen/X86/avg.ll b/test/CodeGen/X86/avg.ll
index dd11f6ca293..d2b9984a7fc 100644
--- a/test/CodeGen/X86/avg.ll
+++ b/test/CodeGen/X86/avg.ll
@@ -90,12 +90,12 @@ define void @avg_v16i8(<16 x i8>* %a, <16 x i8>* %b) nounwind {
define void @avg_v32i8(<32 x i8>* %a, <32 x i8>* %b) nounwind {
; SSE2-LABEL: avg_v32i8:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa 16(%rdi), %xmm0
-; SSE2-NEXT: movdqa (%rsi), %xmm1
-; SSE2-NEXT: pavgb (%rdi), %xmm1
-; SSE2-NEXT: pavgb 16(%rsi), %xmm0
-; SSE2-NEXT: movdqu %xmm0, (%rax)
+; SSE2-NEXT: movdqa (%rsi), %xmm0
+; SSE2-NEXT: movdqa 16(%rsi), %xmm1
+; SSE2-NEXT: pavgb (%rdi), %xmm0
+; SSE2-NEXT: pavgb 16(%rdi), %xmm1
; SSE2-NEXT: movdqu %xmm1, (%rax)
+; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: retq
;
; AVX1-LABEL: avg_v32i8:
@@ -545,18 +545,18 @@ define void @avg_v48i8(<48 x i8>* %a, <48 x i8>* %b) nounwind {
define void @avg_v64i8(<64 x i8>* %a, <64 x i8>* %b) nounwind {
; SSE2-LABEL: avg_v64i8:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa 32(%rdi), %xmm0
-; SSE2-NEXT: movdqa (%rsi), %xmm1
-; SSE2-NEXT: movdqa 16(%rsi), %xmm2
+; SSE2-NEXT: movdqa (%rsi), %xmm0
+; SSE2-NEXT: movdqa 16(%rsi), %xmm1
+; SSE2-NEXT: movdqa 32(%rsi), %xmm2
; SSE2-NEXT: movdqa 48(%rsi), %xmm3
-; SSE2-NEXT: pavgb (%rdi), %xmm1
-; SSE2-NEXT: pavgb 16(%rdi), %xmm2
-; SSE2-NEXT: pavgb 32(%rsi), %xmm0
+; SSE2-NEXT: pavgb (%rdi), %xmm0
+; SSE2-NEXT: pavgb 16(%rdi), %xmm1
+; SSE2-NEXT: pavgb 32(%rdi), %xmm2
; SSE2-NEXT: pavgb 48(%rdi), %xmm3
; SSE2-NEXT: movdqu %xmm3, (%rax)
-; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: movdqu %xmm2, (%rax)
; SSE2-NEXT: movdqu %xmm1, (%rax)
+; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: retq
;
; AVX1-LABEL: avg_v64i8:
@@ -582,23 +582,23 @@ define void @avg_v64i8(<64 x i8>* %a, <64 x i8>* %b) nounwind {
;
; AVX2-LABEL: avg_v64i8:
; AVX2: # %bb.0:
-; AVX2-NEXT: vmovdqa 32(%rdi), %ymm0
-; AVX2-NEXT: vmovdqa (%rsi), %ymm1
-; AVX2-NEXT: vpavgb (%rdi), %ymm1, %ymm1
-; AVX2-NEXT: vpavgb 32(%rsi), %ymm0, %ymm0
-; AVX2-NEXT: vmovdqu %ymm0, (%rax)
+; AVX2-NEXT: vmovdqa (%rsi), %ymm0
+; AVX2-NEXT: vmovdqa 32(%rsi), %ymm1
+; AVX2-NEXT: vpavgb (%rdi), %ymm0, %ymm0
+; AVX2-NEXT: vpavgb 32(%rdi), %ymm1, %ymm1
; AVX2-NEXT: vmovdqu %ymm1, (%rax)
+; AVX2-NEXT: vmovdqu %ymm0, (%rax)
; AVX2-NEXT: vzeroupper
; AVX2-NEXT: retq
;
; AVX512F-LABEL: avg_v64i8:
; AVX512F: # %bb.0:
-; AVX512F-NEXT: vmovdqa 32(%rdi), %ymm0
-; AVX512F-NEXT: vmovdqa (%rsi), %ymm1
-; AVX512F-NEXT: vpavgb (%rdi), %ymm1, %ymm1
-; AVX512F-NEXT: vpavgb 32(%rsi), %ymm0, %ymm0
-; AVX512F-NEXT: vmovdqu %ymm0, (%rax)
+; AVX512F-NEXT: vmovdqa (%rsi), %ymm0
+; AVX512F-NEXT: vmovdqa 32(%rsi), %ymm1
+; AVX512F-NEXT: vpavgb (%rdi), %ymm0, %ymm0
+; AVX512F-NEXT: vpavgb 32(%rdi), %ymm1, %ymm1
; AVX512F-NEXT: vmovdqu %ymm1, (%rax)
+; AVX512F-NEXT: vmovdqu %ymm0, (%rax)
; AVX512F-NEXT: vzeroupper
; AVX512F-NEXT: retq
;
@@ -678,12 +678,12 @@ define void @avg_v8i16(<8 x i16>* %a, <8 x i16>* %b) nounwind {
define void @avg_v16i16(<16 x i16>* %a, <16 x i16>* %b) nounwind {
; SSE2-LABEL: avg_v16i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa 16(%rdi), %xmm0
-; SSE2-NEXT: movdqa (%rsi), %xmm1
-; SSE2-NEXT: pavgw (%rdi), %xmm1
-; SSE2-NEXT: pavgw 16(%rsi), %xmm0
-; SSE2-NEXT: movdqu %xmm0, (%rax)
+; SSE2-NEXT: movdqa (%rsi), %xmm0
+; SSE2-NEXT: movdqa 16(%rsi), %xmm1
+; SSE2-NEXT: pavgw (%rdi), %xmm0
+; SSE2-NEXT: pavgw 16(%rdi), %xmm1
; SSE2-NEXT: movdqu %xmm1, (%rax)
+; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: retq
;
; AVX1-LABEL: avg_v16i16:
@@ -729,18 +729,18 @@ define void @avg_v16i16(<16 x i16>* %a, <16 x i16>* %b) nounwind {
define void @avg_v32i16(<32 x i16>* %a, <32 x i16>* %b) nounwind {
; SSE2-LABEL: avg_v32i16:
; SSE2: # %bb.0:
-; SSE2-NEXT: movdqa 32(%rdi), %xmm0
-; SSE2-NEXT: movdqa (%rsi), %xmm1
-; SSE2-NEXT: movdqa 16(%rsi), %xmm2
+; SSE2-NEXT: movdqa (%rsi), %xmm0
+; SSE2-NEXT: movdqa 16(%rsi), %xmm1
+; SSE2-NEXT: movdqa 32(%rsi), %xmm2
; SSE2-NEXT: movdqa 48(%rsi), %xmm3
-; SSE2-NEXT: pavgw (%rdi), %xmm1
-; SSE2-NEXT: pavgw 16(%rdi), %xmm2
-; SSE2-NEXT: pavgw 32(%rsi), %xmm0
+; SSE2-NEXT: pavgw (%rdi), %xmm0
+; SSE2-NEXT: pavgw 16(%rdi), %xmm1
+; SSE2-NEXT: pavgw 32(%rdi), %xmm2
; SSE2-NEXT: pavgw 48(%rdi), %xmm3
; SSE2-NEXT: movdqu %xmm3, (%rax)
-; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: movdqu %xmm2, (%rax)
; SSE2-NEXT: movdqu %xmm1, (%rax)
+; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: retq
;
; AVX1-LABEL: avg_v32i16:
@@ -766,23 +766,23 @@ define void @avg_v32i16(<32 x i16>* %a, <32 x i16>* %b) nounwind {
;
; AVX2-LABEL: avg_v32i16:
; AVX2: # %bb.0:
-; AVX2-NEXT: vmovdqa 32(%rdi), %ymm0
-; AVX2-NEXT: vmovdqa (%rsi), %ymm1
-; AVX2-NEXT: vpavgw (%rdi), %ymm1, %ymm1
-; AVX2-NEXT: vpavgw 32(%rsi), %ymm0, %ymm0
-; AVX2-NEXT: vmovdqu %ymm0, (%rax)
+; AVX2-NEXT: vmovdqa (%rsi), %ymm0
+; AVX2-NEXT: vmovdqa 32(%rsi), %ymm1
+; AVX2-NEXT: vpavgw (%rdi), %ymm0, %ymm0
+; AVX2-NEXT: vpavgw 32(%rdi), %ymm1, %ymm1
; AVX2-NEXT: vmovdqu %ymm1, (%rax)
+; AVX2-NEXT: vmovdqu %ymm0, (%rax)
; AVX2-NEXT: vzeroupper
; AVX2-NEXT: retq
;
; AVX512F-LABEL: avg_v32i16:
; AVX512F: # %bb.0:
-; AVX512F-NEXT: vmovdqa 32(%rdi), %ymm0
-; AVX512F-NEXT: vmovdqa (%rsi), %ymm1
-; AVX512F-NEXT: vpavgw (%rdi), %ymm1, %ymm1
-; AVX512F-NEXT: vpavgw 32(%rsi), %ymm0, %ymm0
-; AVX512F-NEXT: vmovdqu %ymm0, (%rax)
+; AVX512F-NEXT: vmovdqa (%rsi), %ymm0
+; AVX512F-NEXT: vmovdqa 32(%rsi), %ymm1
+; AVX512F-NEXT: vpavgw (%rdi), %ymm0, %ymm0
+; AVX512F-NEXT: vpavgw 32(%rdi), %ymm1, %ymm1
; AVX512F-NEXT: vmovdqu %ymm1, (%rax)
+; AVX512F-NEXT: vmovdqu %ymm0, (%rax)
; AVX512F-NEXT: vzeroupper
; AVX512F-NEXT: retq
;
@@ -891,9 +891,9 @@ define void @avg_v32i8_2(<32 x i8>* %a, <32 x i8>* %b) nounwind {
; SSE2-LABEL: avg_v32i8_2:
; SSE2: # %bb.0:
; SSE2-NEXT: movdqa (%rdi), %xmm0
-; SSE2-NEXT: movdqa 16(%rsi), %xmm1
+; SSE2-NEXT: movdqa 16(%rdi), %xmm1
; SSE2-NEXT: pavgb (%rsi), %xmm0
-; SSE2-NEXT: pavgb 16(%rdi), %xmm1
+; SSE2-NEXT: pavgb 16(%rsi), %xmm1
; SSE2-NEXT: movdqu %xmm1, (%rax)
; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: retq
@@ -1072,9 +1072,9 @@ define void @avg_v16i16_2(<16 x i16>* %a, <16 x i16>* %b) nounwind {
; SSE2-LABEL: avg_v16i16_2:
; SSE2: # %bb.0:
; SSE2-NEXT: movdqa (%rdi), %xmm0
-; SSE2-NEXT: movdqa 16(%rsi), %xmm1
+; SSE2-NEXT: movdqa 16(%rdi), %xmm1
; SSE2-NEXT: pavgw (%rsi), %xmm0
-; SSE2-NEXT: pavgw 16(%rdi), %xmm1
+; SSE2-NEXT: pavgw 16(%rsi), %xmm1
; SSE2-NEXT: movdqu %xmm1, (%rax)
; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: retq
@@ -1124,14 +1124,14 @@ define void @avg_v32i16_2(<32 x i16>* %a, <32 x i16>* %b) nounwind {
; SSE2: # %bb.0:
; SSE2-NEXT: movdqa (%rdi), %xmm0
; SSE2-NEXT: movdqa 16(%rdi), %xmm1
-; SSE2-NEXT: movdqa 48(%rdi), %xmm2
-; SSE2-NEXT: movdqa 32(%rsi), %xmm3
+; SSE2-NEXT: movdqa 32(%rdi), %xmm2
+; SSE2-NEXT: movdqa 48(%rdi), %xmm3
; SSE2-NEXT: pavgw (%rsi), %xmm0
; SSE2-NEXT: pavgw 16(%rsi), %xmm1
-; SSE2-NEXT: pavgw 32(%rdi), %xmm3
-; SSE2-NEXT: pavgw 48(%rsi), %xmm2
-; SSE2-NEXT: movdqu %xmm2, (%rax)
+; SSE2-NEXT: pavgw 32(%rsi), %xmm2
+; SSE2-NEXT: pavgw 48(%rsi), %xmm3
; SSE2-NEXT: movdqu %xmm3, (%rax)
+; SSE2-NEXT: movdqu %xmm2, (%rax)
; SSE2-NEXT: movdqu %xmm1, (%rax)
; SSE2-NEXT: movdqu %xmm0, (%rax)
; SSE2-NEXT: retq
@@ -1160,9 +1160,9 @@ define void @avg_v32i16_2(<32 x i16>* %a, <32 x i16>* %b) nounwind {
; AVX2-LABEL: avg_v32i16_2:
; AVX2: # %bb.0:
; AVX2-NEXT: vmovdqa (%rdi), %ymm0
-; AVX2-NEXT: vmovdqa 32(%rsi), %ymm1
+; AVX2-NEXT: vmovdqa 32(%rdi), %ymm1
; AVX2-NEXT: vpavgw (%rsi), %ymm0, %ymm0
-; AVX2-NEXT: vpavgw 32(%rdi), %ymm1, %ymm1
+; AVX2-NEXT: vpavgw 32(%rsi), %ymm1, %ymm1
; AVX2-NEXT: vmovdqu %ymm1, (%rax)
; AVX2-NEXT: vmovdqu %ymm0, (%rax)
; AVX2-NEXT: vzeroupper
@@ -1171,9 +1171,9 @@ define void @avg_v32i16_2(<32 x i16>* %a, <32 x i16>* %b) nounwind {
; AVX512F-LABEL: avg_v32i16_2:
; AVX512F: # %bb.0:
; AVX512F-NEXT: vmovdqa (%rdi), %ymm0
-; AVX512F-NEXT: vmovdqa 32(%rsi), %ymm1
+; AVX512F-NEXT: vmovdqa 32(%rdi), %ymm1
; AVX512F-NEXT: vpavgw (%rsi), %ymm0, %ymm0
-; AVX512F-NEXT: vpavgw 32(%rdi), %ymm1, %ymm1
+; AVX512F-NEXT: vpavgw 32(%rsi), %ymm1, %ymm1
; AVX512F-NEXT: vmovdqu %ymm1, (%rax)
; AVX512F-NEXT: vmovdqu %ymm0, (%rax)
; AVX512F-NEXT: vzeroupper
diff --git a/test/CodeGen/X86/avx-vbroadcastf128.ll b/test/CodeGen/X86/avx-vbroadcastf128.ll
index 7fdbf31a993..b5026437153 100644
--- a/test/CodeGen/X86/avx-vbroadcastf128.ll
+++ b/test/CodeGen/X86/avx-vbroadcastf128.ll
@@ -235,18 +235,16 @@ define <8 x i32> @PR29088(<4 x i32>* %p0, <8 x float>* %p1) {
; X32: # %bb.0:
; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: vmovaps (%ecx), %xmm0
; X32-NEXT: vxorps %xmm1, %xmm1, %xmm1
+; X32-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
; X32-NEXT: vmovaps %ymm1, (%eax)
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
; X32-NEXT: retl
;
; X64-LABEL: PR29088:
; X64: # %bb.0:
-; X64-NEXT: vmovaps (%rdi), %xmm0
; X64-NEXT: vxorps %xmm1, %xmm1, %xmm1
+; X64-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
; X64-NEXT: vmovaps %ymm1, (%rsi)
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
; X64-NEXT: retq
%ld = load <4 x i32>, <4 x i32>* %p0
store <8 x float> zeroinitializer, <8 x float>* %p1
diff --git a/test/CodeGen/X86/avx2-vbroadcast.ll b/test/CodeGen/X86/avx2-vbroadcast.ll
index e5506257e4c..3ae6c0b9d81 100644
--- a/test/CodeGen/X86/avx2-vbroadcast.ll
+++ b/test/CodeGen/X86/avx2-vbroadcast.ll
@@ -189,12 +189,7 @@ define <2 x i64> @Q64(i64* %ptr) nounwind uwtable readnone ssp {
; X32-LABEL: Q64:
; X32: ## %bb.0: ## %entry
; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl (%eax), %ecx
-; X32-NEXT: movl 4(%eax), %eax
-; X32-NEXT: vmovd %ecx, %xmm0
-; X32-NEXT: vpinsrd $1, %eax, %xmm0, %xmm0
-; X32-NEXT: vpinsrd $2, %ecx, %xmm0, %xmm0
-; X32-NEXT: vpinsrd $3, %eax, %xmm0, %xmm0
+; X32-NEXT: vpbroadcastq (%eax), %xmm0
; X32-NEXT: retl
;
; X64-LABEL: Q64:
@@ -212,13 +207,8 @@ define <4 x i64> @QQ64(i64* %ptr) nounwind uwtable readnone ssp {
; X32-LABEL: QQ64:
; X32: ## %bb.0: ## %entry
; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl (%eax), %ecx
-; X32-NEXT: movl 4(%eax), %eax
-; X32-NEXT: vmovd %ecx, %xmm0
-; X32-NEXT: vpinsrd $1, %eax, %xmm0, %xmm0
-; X32-NEXT: vpinsrd $2, %ecx, %xmm0, %xmm0
-; X32-NEXT: vpinsrd $3, %eax, %xmm0, %xmm0
-; X32-NEXT: vinserti128 $1, %xmm0, %ymm0, %ymm0
+; X32-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
+; X32-NEXT: vbroadcastsd %xmm0, %ymm0
; X32-NEXT: retl
;
; X64-LABEL: QQ64:
@@ -1075,9 +1065,7 @@ define void @isel_crash_16b(i8* %cV_R.addr) {
; X64: ## %bb.0: ## %eintry
; X64-NEXT: vxorps %xmm0, %xmm0, %xmm0
; X64-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-NEXT: movb (%rdi), %al
-; X64-NEXT: vmovd %eax, %xmm1
-; X64-NEXT: vpbroadcastb %xmm1, %xmm1
+; X64-NEXT: vpbroadcastb (%rdi), %xmm1
; X64-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
; X64-NEXT: vmovdqa %xmm1, -{{[0-9]+}}(%rsp)
; X64-NEXT: retq
@@ -1128,9 +1116,7 @@ define void @isel_crash_32b(i8* %cV_R.addr) {
; X64-NEXT: subq $128, %rsp
; X64-NEXT: vxorps %xmm0, %xmm0, %xmm0
; X64-NEXT: vmovaps %ymm0, (%rsp)
-; X64-NEXT: movb (%rdi), %al
-; X64-NEXT: vmovd %eax, %xmm1
-; X64-NEXT: vpbroadcastb %xmm1, %ymm1
+; X64-NEXT: vpbroadcastb (%rdi), %ymm1
; X64-NEXT: vmovaps %ymm0, {{[0-9]+}}(%rsp)
; X64-NEXT: vmovdqa %ymm1, {{[0-9]+}}(%rsp)
; X64-NEXT: movq %rbp, %rsp
@@ -1170,9 +1156,7 @@ define void @isel_crash_8w(i16* %cV_R.addr) {
; X64: ## %bb.0: ## %entry
; X64-NEXT: vxorps %xmm0, %xmm0, %xmm0
; X64-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-NEXT: movzwl (%rdi), %eax
-; X64-NEXT: vmovd %eax, %xmm1
-; X64-NEXT: vpbroadcastw %xmm1, %xmm1
+; X64-NEXT: vpbroadcastw (%rdi), %xmm1
; X64-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
; X64-NEXT: vmovdqa %xmm1, -{{[0-9]+}}(%rsp)
; X64-NEXT: retq
@@ -1223,9 +1207,7 @@ define void @isel_crash_16w(i16* %cV_R.addr) {
; X64-NEXT: subq $128, %rsp
; X64-NEXT: vxorps %xmm0, %xmm0, %xmm0
; X64-NEXT: vmovaps %ymm0, (%rsp)
-; X64-NEXT: movzwl (%rdi), %eax
-; X64-NEXT: vmovd %eax, %xmm1
-; X64-NEXT: vpbroadcastw %xmm1, %ymm1
+; X64-NEXT: vpbroadcastw (%rdi), %ymm1
; X64-NEXT: vmovaps %ymm0, {{[0-9]+}}(%rsp)
; X64-NEXT: vmovdqa %ymm1, {{[0-9]+}}(%rsp)
; X64-NEXT: movq %rbp, %rsp
@@ -1261,26 +1243,14 @@ define void @isel_crash_4d(i32* %cV_R.addr) {
; X32-NEXT: addl $60, %esp
; X32-NEXT: retl
;
-; X64-AVX2-LABEL: isel_crash_4d:
-; X64-AVX2: ## %bb.0: ## %entry
-; X64-AVX2-NEXT: vxorps %xmm0, %xmm0, %xmm0
-; X64-AVX2-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: movl (%rdi), %eax
-; X64-AVX2-NEXT: vmovd %eax, %xmm1
-; X64-AVX2-NEXT: vpbroadcastd %xmm1, %xmm1
-; X64-AVX2-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: vmovdqa %xmm1, -{{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512VL-LABEL: isel_crash_4d:
-; X64-AVX512VL: ## %bb.0: ## %entry
-; X64-AVX512VL-NEXT: vxorps %xmm0, %xmm0, %xmm0
-; X64-AVX512VL-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: movl (%rdi), %eax
-; X64-AVX512VL-NEXT: vpbroadcastd %eax, %xmm1
-; X64-AVX512VL-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: vmovdqa %xmm1, -{{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: retq
+; X64-LABEL: isel_crash_4d:
+; X64: ## %bb.0: ## %entry
+; X64-NEXT: vxorps %xmm0, %xmm0, %xmm0
+; X64-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
+; X64-NEXT: vbroadcastss (%rdi), %xmm1
+; X64-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
+; X64-NEXT: vmovaps %xmm1, -{{[0-9]+}}(%rsp)
+; X64-NEXT: retq
entry:
%__a.addr.i = alloca <2 x i64>, align 16
%__b.addr.i = alloca <2 x i64>, align 16
@@ -1317,46 +1287,24 @@ define void @isel_crash_8d(i32* %cV_R.addr) {
; X32-NEXT: vzeroupper
; X32-NEXT: retl
;
-; X64-AVX2-LABEL: isel_crash_8d:
-; X64-AVX2: ## %bb.0: ## %eintry
-; X64-AVX2-NEXT: pushq %rbp
-; X64-AVX2-NEXT: .cfi_def_cfa_offset 16
-; X64-AVX2-NEXT: .cfi_offset %rbp, -16
-; X64-AVX2-NEXT: movq %rsp, %rbp
-; X64-AVX2-NEXT: .cfi_def_cfa_register %rbp
-; X64-AVX2-NEXT: andq $-32, %rsp
-; X64-AVX2-NEXT: subq $128, %rsp
-; X64-AVX2-NEXT: vxorps %xmm0, %xmm0, %xmm0
-; X64-AVX2-NEXT: vmovaps %ymm0, (%rsp)
-; X64-AVX2-NEXT: movl (%rdi), %eax
-; X64-AVX2-NEXT: vmovd %eax, %xmm1
-; X64-AVX2-NEXT: vpbroadcastd %xmm1, %ymm1
-; X64-AVX2-NEXT: vmovaps %ymm0, {{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: vmovdqa %ymm1, {{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: movq %rbp, %rsp
-; X64-AVX2-NEXT: popq %rbp
-; X64-AVX2-NEXT: vzeroupper
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512VL-LABEL: isel_crash_8d:
-; X64-AVX512VL: ## %bb.0: ## %eintry
-; X64-AVX512VL-NEXT: pushq %rbp
-; X64-AVX512VL-NEXT: .cfi_def_cfa_offset 16
-; X64-AVX512VL-NEXT: .cfi_offset %rbp, -16
-; X64-AVX512VL-NEXT: movq %rsp, %rbp
-; X64-AVX512VL-NEXT: .cfi_def_cfa_register %rbp
-; X64-AVX512VL-NEXT: andq $-32, %rsp
-; X64-AVX512VL-NEXT: subq $128, %rsp
-; X64-AVX512VL-NEXT: vxorps %xmm0, %xmm0, %xmm0
-; X64-AVX512VL-NEXT: vmovaps %ymm0, (%rsp)
-; X64-AVX512VL-NEXT: movl (%rdi), %eax
-; X64-AVX512VL-NEXT: vpbroadcastd %eax, %ymm1
-; X64-AVX512VL-NEXT: vmovaps %ymm0, {{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: vmovdqa %ymm1, {{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: movq %rbp, %rsp
-; X64-AVX512VL-NEXT: popq %rbp
-; X64-AVX512VL-NEXT: vzeroupper
-; X64-AVX512VL-NEXT: retq
+; X64-LABEL: isel_crash_8d:
+; X64: ## %bb.0: ## %eintry
+; X64-NEXT: pushq %rbp
+; X64-NEXT: .cfi_def_cfa_offset 16
+; X64-NEXT: .cfi_offset %rbp, -16
+; X64-NEXT: movq %rsp, %rbp
+; X64-NEXT: .cfi_def_cfa_register %rbp
+; X64-NEXT: andq $-32, %rsp
+; X64-NEXT: subq $128, %rsp
+; X64-NEXT: vxorps %xmm0, %xmm0, %xmm0
+; X64-NEXT: vmovaps %ymm0, (%rsp)
+; X64-NEXT: vbroadcastss (%rdi), %ymm1
+; X64-NEXT: vmovaps %ymm0, {{[0-9]+}}(%rsp)
+; X64-NEXT: vmovaps %ymm1, {{[0-9]+}}(%rsp)
+; X64-NEXT: movq %rbp, %rsp
+; X64-NEXT: popq %rbp
+; X64-NEXT: vzeroupper
+; X64-NEXT: retq
eintry:
%__a.addr.i = alloca <4 x i64>, align 16
%__b.addr.i = alloca <4 x i64>, align 16
@@ -1380,37 +1328,20 @@ define void @isel_crash_2q(i64* %cV_R.addr) {
; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
; X32-NEXT: vxorps %xmm0, %xmm0, %xmm0
; X32-NEXT: vmovaps %xmm0, (%esp)
-; X32-NEXT: movl (%eax), %ecx
-; X32-NEXT: movl 4(%eax), %eax
-; X32-NEXT: vmovd %ecx, %xmm1
-; X32-NEXT: vpinsrd $1, %eax, %xmm1, %xmm1
-; X32-NEXT: vpinsrd $2, %ecx, %xmm1, %xmm1
-; X32-NEXT: vpinsrd $3, %eax, %xmm1, %xmm1
+; X32-NEXT: vpbroadcastq (%eax), %xmm1
; X32-NEXT: vmovaps %xmm0, {{[0-9]+}}(%esp)
; X32-NEXT: vmovdqa %xmm1, {{[0-9]+}}(%esp)
; X32-NEXT: addl $60, %esp
; X32-NEXT: retl
;
-; X64-AVX2-LABEL: isel_crash_2q:
-; X64-AVX2: ## %bb.0: ## %entry
-; X64-AVX2-NEXT: vxorps %xmm0, %xmm0, %xmm0
-; X64-AVX2-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: movq (%rdi), %rax
-; X64-AVX2-NEXT: vmovq %rax, %xmm1
-; X64-AVX2-NEXT: vpbroadcastq %xmm1, %xmm1
-; X64-AVX2-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: vmovdqa %xmm1, -{{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512VL-LABEL: isel_crash_2q:
-; X64-AVX512VL: ## %bb.0: ## %entry
-; X64-AVX512VL-NEXT: vxorps %xmm0, %xmm0, %xmm0
-; X64-AVX512VL-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: movq (%rdi), %rax
-; X64-AVX512VL-NEXT: vpbroadcastq %rax, %xmm1
-; X64-AVX512VL-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: vmovdqa %xmm1, -{{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: retq
+; X64-LABEL: isel_crash_2q:
+; X64: ## %bb.0: ## %entry
+; X64-NEXT: vxorps %xmm0, %xmm0, %xmm0
+; X64-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
+; X64-NEXT: vpbroadcastq (%rdi), %xmm1
+; X64-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
+; X64-NEXT: vmovdqa %xmm1, -{{[0-9]+}}(%rsp)
+; X64-NEXT: retq
entry:
%__a.addr.i = alloca <2 x i64>, align 16
%__b.addr.i = alloca <2 x i64>, align 16
@@ -1438,60 +1369,33 @@ define void @isel_crash_4q(i64* %cV_R.addr) {
; X32-NEXT: movl 8(%ebp), %eax
; X32-NEXT: vxorps %xmm0, %xmm0, %xmm0
; X32-NEXT: vmovaps %ymm0, (%esp)
-; X32-NEXT: movl (%eax), %ecx
-; X32-NEXT: movl 4(%eax), %eax
-; X32-NEXT: vmovd %ecx, %xmm1
-; X32-NEXT: vpinsrd $1, %eax, %xmm1, %xmm1
-; X32-NEXT: vpinsrd $2, %ecx, %xmm1, %xmm1
-; X32-NEXT: vpinsrd $3, %eax, %xmm1, %xmm1
-; X32-NEXT: vinserti128 $1, %xmm1, %ymm1, %ymm1
+; X32-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
+; X32-NEXT: vbroadcastsd %xmm1, %ymm1
; X32-NEXT: vmovaps %ymm0, {{[0-9]+}}(%esp)
-; X32-NEXT: vmovdqa %ymm1, {{[0-9]+}}(%esp)
+; X32-NEXT: vmovaps %ymm1, {{[0-9]+}}(%esp)
; X32-NEXT: movl %ebp, %esp
; X32-NEXT: popl %ebp
; X32-NEXT: vzeroupper
; X32-NEXT: retl
;
-; X64-AVX2-LABEL: isel_crash_4q:
-; X64-AVX2: ## %bb.0: ## %eintry
-; X64-AVX2-NEXT: pushq %rbp
-; X64-AVX2-NEXT: .cfi_def_cfa_offset 16
-; X64-AVX2-NEXT: .cfi_offset %rbp, -16
-; X64-AVX2-NEXT: movq %rsp, %rbp
-; X64-AVX2-NEXT: .cfi_def_cfa_register %rbp
-; X64-AVX2-NEXT: andq $-32, %rsp
-; X64-AVX2-NEXT: subq $128, %rsp
-; X64-AVX2-NEXT: vxorps %xmm0, %xmm0, %xmm0
-; X64-AVX2-NEXT: vmovaps %ymm0, (%rsp)
-; X64-AVX2-NEXT: movq (%rdi), %rax
-; X64-AVX2-NEXT: vmovq %rax, %xmm1
-; X64-AVX2-NEXT: vpbroadcastq %xmm1, %ymm1
-; X64-AVX2-NEXT: vmovaps %ymm0, {{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: vmovdqa %ymm1, {{[0-9]+}}(%rsp)
-; X64-AVX2-NEXT: movq %rbp, %rsp
-; X64-AVX2-NEXT: popq %rbp
-; X64-AVX2-NEXT: vzeroupper
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512VL-LABEL: isel_crash_4q:
-; X64-AVX512VL: ## %bb.0: ## %eintry
-; X64-AVX512VL-NEXT: pushq %rbp
-; X64-AVX512VL-NEXT: .cfi_def_cfa_offset 16
-; X64-AVX512VL-NEXT: .cfi_offset %rbp, -16
-; X64-AVX512VL-NEXT: movq %rsp, %rbp
-; X64-AVX512VL-NEXT: .cfi_def_cfa_register %rbp
-; X64-AVX512VL-NEXT: andq $-32, %rsp
-; X64-AVX512VL-NEXT: subq $128, %rsp
-; X64-AVX512VL-NEXT: vxorps %xmm0, %xmm0, %xmm0
-; X64-AVX512VL-NEXT: vmovaps %ymm0, (%rsp)
-; X64-AVX512VL-NEXT: movq (%rdi), %rax
-; X64-AVX512VL-NEXT: vpbroadcastq %rax, %ymm1
-; X64-AVX512VL-NEXT: vmovaps %ymm0, {{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: vmovdqa %ymm1, {{[0-9]+}}(%rsp)
-; X64-AVX512VL-NEXT: movq %rbp, %rsp
-; X64-AVX512VL-NEXT: popq %rbp
-; X64-AVX512VL-NEXT: vzeroupper
-; X64-AVX512VL-NEXT: retq
+; X64-LABEL: isel_crash_4q:
+; X64: ## %bb.0: ## %eintry
+; X64-NEXT: pushq %rbp
+; X64-NEXT: .cfi_def_cfa_offset 16
+; X64-NEXT: .cfi_offset %rbp, -16
+; X64-NEXT: movq %rsp, %rbp
+; X64-NEXT: .cfi_def_cfa_register %rbp
+; X64-NEXT: andq $-32, %rsp
+; X64-NEXT: subq $128, %rsp
+; X64-NEXT: vxorps %xmm0, %xmm0, %xmm0
+; X64-NEXT: vmovaps %ymm0, (%rsp)
+; X64-NEXT: vbroadcastsd (%rdi), %ymm1
+; X64-NEXT: vmovaps %ymm0, {{[0-9]+}}(%rsp)
+; X64-NEXT: vmovaps %ymm1, {{[0-9]+}}(%rsp)
+; X64-NEXT: movq %rbp, %rsp
+; X64-NEXT: popq %rbp
+; X64-NEXT: vzeroupper
+; X64-NEXT: retq
eintry:
%__a.addr.i = alloca <4 x i64>, align 16
%__b.addr.i = alloca <4 x i64>, align 16
diff --git a/test/CodeGen/X86/avx2-vbroadcasti128.ll b/test/CodeGen/X86/avx2-vbroadcasti128.ll
index 254cdfdd8cb..996e6796616 100644
--- a/test/CodeGen/X86/avx2-vbroadcasti128.ll
+++ b/test/CodeGen/X86/avx2-vbroadcasti128.ll
@@ -271,18 +271,16 @@ define <8 x i32> @PR29088(<4 x i32>* %p0, <8 x float>* %p1) {
; X32: # %bb.0:
; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: vmovaps (%ecx), %xmm0
; X32-NEXT: vxorps %xmm1, %xmm1, %xmm1
+; X32-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
; X32-NEXT: vmovaps %ymm1, (%eax)
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
; X32-NEXT: retl
;
; X64-LABEL: PR29088:
; X64: # %bb.0:
-; X64-NEXT: vmovaps (%rdi), %xmm0
; X64-NEXT: vxorps %xmm1, %xmm1, %xmm1
+; X64-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
; X64-NEXT: vmovaps %ymm1, (%rsi)
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
; X64-NEXT: retq
%ld = load <4 x i32>, <4 x i32>* %p0
store <8 x float> zeroinitializer, <8 x float>* %p1
diff --git a/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll b/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll
index 80127f66bdf..8ebbbd4b49f 100644
--- a/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll
+++ b/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll
@@ -435,16 +435,11 @@ entry:
define <8 x i64> @test_mm512_mask_set1_epi64(<8 x i64> %__O, i8 zeroext %__M, i64 %__A) {
; X32-LABEL: test_mm512_mask_set1_epi64:
; X32: # %bb.0: # %entry
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: movl {{[0-9]+}}(%esp), %edx
; X32-NEXT: movb {{[0-9]+}}(%esp), %al
-; X32-NEXT: vmovd %edx, %xmm1
-; X32-NEXT: vpinsrd $1, %ecx, %xmm1, %xmm1
-; X32-NEXT: vpinsrd $2, %edx, %xmm1, %xmm1
-; X32-NEXT: vpinsrd $3, %ecx, %xmm1, %xmm1
-; X32-NEXT: vinserti128 $1, %xmm1, %ymm1, %ymm1
+; X32-NEXT: vmovd {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X32-NEXT: vpinsrd $1, {{[0-9]+}}(%esp), %xmm1, %xmm1
; X32-NEXT: kmovw %eax, %k1
-; X32-NEXT: vinserti64x4 $1, %ymm1, %zmm1, %zmm0 {%k1}
+; X32-NEXT: vpbroadcastq %xmm1, %zmm0 {%k1}
; X32-NEXT: retl
;
; X64-LABEL: test_mm512_mask_set1_epi64:
@@ -463,16 +458,11 @@ entry:
define <8 x i64> @test_mm512_maskz_set1_epi64(i8 zeroext %__M, i64 %__A) {
; X32-LABEL: test_mm512_maskz_set1_epi64:
; X32: # %bb.0: # %entry
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: movl {{[0-9]+}}(%esp), %edx
; X32-NEXT: movb {{[0-9]+}}(%esp), %al
-; X32-NEXT: vmovd %edx, %xmm0
-; X32-NEXT: vpinsrd $1, %ecx, %xmm0, %xmm0
-; X32-NEXT: vpinsrd $2, %edx, %xmm0, %xmm0
-; X32-NEXT: vpinsrd $3, %ecx, %xmm0, %xmm0
-; X32-NEXT: vinserti128 $1, %xmm0, %ymm0, %ymm0
+; X32-NEXT: vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X32-NEXT: vpinsrd $1, {{[0-9]+}}(%esp), %xmm0, %xmm0
; X32-NEXT: kmovw %eax, %k1
-; X32-NEXT: vinserti64x4 $1, %ymm0, %zmm0, %zmm0 {%k1} {z}
+; X32-NEXT: vpbroadcastq %xmm0, %zmm0 {%k1} {z}
; X32-NEXT: retl
;
; X64-LABEL: test_mm512_maskz_set1_epi64:
diff --git a/test/CodeGen/X86/avx512-vbroadcasti128.ll b/test/CodeGen/X86/avx512-vbroadcasti128.ll
index c5ecb1559b4..2bf69cfadcf 100644
--- a/test/CodeGen/X86/avx512-vbroadcasti128.ll
+++ b/test/CodeGen/X86/avx512-vbroadcasti128.ll
@@ -186,26 +186,23 @@ define <64 x i8> @test_broadcast_16i8_64i8(<16 x i8> *%p) nounwind {
define <8 x i32> @PR29088(<4 x i32>* %p0, <8 x float>* %p1) {
; X64-AVX512VL-LABEL: PR29088:
; X64-AVX512VL: ## %bb.0:
-; X64-AVX512VL-NEXT: vmovaps (%rdi), %xmm0
; X64-AVX512VL-NEXT: vpxor %xmm1, %xmm1, %xmm1
+; X64-AVX512VL-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
; X64-AVX512VL-NEXT: vmovdqa %ymm1, (%rsi)
-; X64-AVX512VL-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
; X64-AVX512VL-NEXT: retq
;
; X64-AVX512BWVL-LABEL: PR29088:
; X64-AVX512BWVL: ## %bb.0:
-; X64-AVX512BWVL-NEXT: vmovaps (%rdi), %xmm0
; X64-AVX512BWVL-NEXT: vpxor %xmm1, %xmm1, %xmm1
+; X64-AVX512BWVL-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
; X64-AVX512BWVL-NEXT: vmovdqa %ymm1, (%rsi)
-; X64-AVX512BWVL-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
; X64-AVX512BWVL-NEXT: retq
;
; X64-AVX512DQVL-LABEL: PR29088:
; X64-AVX512DQVL: ## %bb.0:
-; X64-AVX512DQVL-NEXT: vmovaps (%rdi), %xmm0
; X64-AVX512DQVL-NEXT: vxorps %xmm1, %xmm1, %xmm1
+; X64-AVX512DQVL-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
; X64-AVX512DQVL-NEXT: vmovaps %ymm1, (%rsi)
-; X64-AVX512DQVL-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
; X64-AVX512DQVL-NEXT: retq
%ld = load <4 x i32>, <4 x i32>* %p0
store <8 x float> zeroinitializer, <8 x float>* %p1
diff --git a/test/CodeGen/X86/avx512vl-intrinsics-fast-isel.ll b/test/CodeGen/X86/avx512vl-intrinsics-fast-isel.ll
index 8c13d4b842f..a2d275c1109 100644
--- a/test/CodeGen/X86/avx512vl-intrinsics-fast-isel.ll
+++ b/test/CodeGen/X86/avx512vl-intrinsics-fast-isel.ll
@@ -797,16 +797,11 @@ entry:
define <4 x i64> @test_mm256_mask_set1_epi64(<4 x i64> %__O, i8 zeroext %__M, i64 %__A) {
; X32-LABEL: test_mm256_mask_set1_epi64:
; X32: # %bb.0: # %entry
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: movb {{[0-9]+}}(%esp), %dl
-; X32-NEXT: vmovd %ecx, %xmm1
-; X32-NEXT: vpinsrd $1, %eax, %xmm1, %xmm1
-; X32-NEXT: vpinsrd $2, %ecx, %xmm1, %xmm1
-; X32-NEXT: vpinsrd $3, %eax, %xmm1, %xmm1
-; X32-NEXT: vinserti128 $1, %xmm1, %ymm1, %ymm1
-; X32-NEXT: kmovw %edx, %k1
-; X32-NEXT: vmovdqa64 %ymm1, %ymm0 {%k1}
+; X32-NEXT: movb {{[0-9]+}}(%esp), %al
+; X32-NEXT: vmovd {{.*#+}} xmm1 = mem[0],zero,zero,zero
+; X32-NEXT: vpinsrd $1, {{[0-9]+}}(%esp), %xmm1, %xmm1
+; X32-NEXT: kmovw %eax, %k1
+; X32-NEXT: vpbroadcastq %xmm1, %ymm0 {%k1}
; X32-NEXT: retl
;
; X64-LABEL: test_mm256_mask_set1_epi64:
@@ -826,16 +821,11 @@ entry:
define <4 x i64> @test_mm256_maskz_set1_epi64(i8 zeroext %__M, i64 %__A) {
; X32-LABEL: test_mm256_maskz_set1_epi64:
; X32: # %bb.0: # %entry
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: movb {{[0-9]+}}(%esp), %dl
-; X32-NEXT: vmovd %ecx, %xmm0
-; X32-NEXT: vpinsrd $1, %eax, %xmm0, %xmm0
-; X32-NEXT: vpinsrd $2, %ecx, %xmm0, %xmm0
-; X32-NEXT: vpinsrd $3, %eax, %xmm0, %xmm0
-; X32-NEXT: vinserti128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: kmovw %edx, %k1
-; X32-NEXT: vmovdqa64 %ymm0, %ymm0 {%k1} {z}
+; X32-NEXT: movb {{[0-9]+}}(%esp), %al
+; X32-NEXT: vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
+; X32-NEXT: vpinsrd $1, {{[0-9]+}}(%esp), %xmm0, %xmm0
+; X32-NEXT: kmovw %eax, %k1
+; X32-NEXT: vpbroadcastq %xmm0, %ymm0 {%k1} {z}
; X32-NEXT: retl
;
; X64-LABEL: test_mm256_maskz_set1_epi64:
diff --git a/test/CodeGen/X86/broadcastm-lowering.ll b/test/CodeGen/X86/broadcastm-lowering.ll
index 428eaa19497..664f3b2eba6 100644
--- a/test/CodeGen/X86/broadcastm-lowering.ll
+++ b/test/CodeGen/X86/broadcastm-lowering.ll
@@ -122,9 +122,7 @@ define <8 x i64> @test_mm512_epi64(<8 x i32> %a, <8 x i32> %b) {
; X86-AVX512VLCDBW-NEXT: kmovd %k0, %eax
; X86-AVX512VLCDBW-NEXT: movzbl %al, %eax
; X86-AVX512VLCDBW-NEXT: vmovd %eax, %xmm0
-; X86-AVX512VLCDBW-NEXT: vpbroadcastq %xmm0, %xmm0
-; X86-AVX512VLCDBW-NEXT: vinserti128 $1, %xmm0, %ymm0, %ymm0
-; X86-AVX512VLCDBW-NEXT: vinserti64x4 $1, %ymm0, %zmm0, %zmm0
+; X86-AVX512VLCDBW-NEXT: vpbroadcastq %xmm0, %zmm0
; X86-AVX512VLCDBW-NEXT: retl
entry:
%0 = icmp eq <8 x i32> %a, %b
@@ -160,8 +158,7 @@ define <4 x i64> @test_mm256_epi64(<8 x i32> %a, <8 x i32> %b) {
; X86-AVX512VLCDBW-NEXT: kmovd %k0, %eax
; X86-AVX512VLCDBW-NEXT: movzbl %al, %eax
; X86-AVX512VLCDBW-NEXT: vmovd %eax, %xmm0
-; X86-AVX512VLCDBW-NEXT: vpbroadcastq %xmm0, %xmm0
-; X86-AVX512VLCDBW-NEXT: vinserti128 $1, %xmm0, %ymm0, %ymm0
+; X86-AVX512VLCDBW-NEXT: vpbroadcastq %xmm0, %ymm0
; X86-AVX512VLCDBW-NEXT: retl
entry:
%0 = icmp eq <8 x i32> %a, %b
diff --git a/test/CodeGen/X86/i256-add.ll b/test/CodeGen/X86/i256-add.ll
deleted file mode 100644
index 36d838a68cb..00000000000
--- a/test/CodeGen/X86/i256-add.ll
+++ /dev/null
@@ -1,135 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; RUN: llc < %s -mtriple=i386-unknown | FileCheck %s --check-prefix=X32
-; RUN: llc < %s -mtriple=x86_64-unknown | FileCheck %s --check-prefix=X64
-
-define void @add(i256* %p, i256* %q) nounwind {
-; X32-LABEL: add:
-; X32: # %bb.0:
-; X32-NEXT: pushl %ebp
-; X32-NEXT: pushl %ebx
-; X32-NEXT: pushl %edi
-; X32-NEXT: pushl %esi
-; X32-NEXT: subl $12, %esp
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: movl 8(%ecx), %edi
-; X32-NEXT: movl (%ecx), %edx
-; X32-NEXT: movl 4(%ecx), %ebx
-; X32-NEXT: movl 28(%eax), %esi
-; X32-NEXT: movl %esi, {{[0-9]+}}(%esp) # 4-byte Spill
-; X32-NEXT: movl 24(%eax), %ebp
-; X32-NEXT: addl (%eax), %edx
-; X32-NEXT: movl %edx, {{[0-9]+}}(%esp) # 4-byte Spill
-; X32-NEXT: adcl 4(%eax), %ebx
-; X32-NEXT: adcl 8(%eax), %edi
-; X32-NEXT: movl %edi, (%esp) # 4-byte Spill
-; X32-NEXT: movl 20(%eax), %edi
-; X32-NEXT: movl 12(%eax), %edx
-; X32-NEXT: movl 16(%eax), %esi
-; X32-NEXT: adcl 12(%ecx), %edx
-; X32-NEXT: adcl 16(%ecx), %esi
-; X32-NEXT: adcl 20(%ecx), %edi
-; X32-NEXT: movl %ebp, %eax
-; X32-NEXT: adcl 24(%ecx), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ebp # 4-byte Reload
-; X32-NEXT: adcl %ebp, 28(%ecx)
-; X32-NEXT: movl (%esp), %ebp # 4-byte Reload
-; X32-NEXT: movl %ebp, 8(%ecx)
-; X32-NEXT: movl %ebx, 4(%ecx)
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ebx # 4-byte Reload
-; X32-NEXT: movl %ebx, (%ecx)
-; X32-NEXT: movl %edx, 12(%ecx)
-; X32-NEXT: movl %esi, 16(%ecx)
-; X32-NEXT: movl %edi, 20(%ecx)
-; X32-NEXT: movl %eax, 24(%ecx)
-; X32-NEXT: addl $12, %esp
-; X32-NEXT: popl %esi
-; X32-NEXT: popl %edi
-; X32-NEXT: popl %ebx
-; X32-NEXT: popl %ebp
-; X32-NEXT: retl
-;
-; X64-LABEL: add:
-; X64: # %bb.0:
-; X64-NEXT: movq 16(%rdi), %rax
-; X64-NEXT: movq (%rdi), %rcx
-; X64-NEXT: movq 8(%rdi), %rdx
-; X64-NEXT: movq 24(%rsi), %r8
-; X64-NEXT: addq (%rsi), %rcx
-; X64-NEXT: adcq 8(%rsi), %rdx
-; X64-NEXT: adcq 16(%rsi), %rax
-; X64-NEXT: adcq %r8, 24(%rdi)
-; X64-NEXT: movq %rax, 16(%rdi)
-; X64-NEXT: movq %rdx, 8(%rdi)
-; X64-NEXT: movq %rcx, (%rdi)
-; X64-NEXT: retq
- %a = load i256, i256* %p
- %b = load i256, i256* %q
- %c = add i256 %a, %b
- store i256 %c, i256* %p
- ret void
-}
-define void @sub(i256* %p, i256* %q) nounwind {
-; X32-LABEL: sub:
-; X32: # %bb.0:
-; X32-NEXT: pushl %ebp
-; X32-NEXT: pushl %ebx
-; X32-NEXT: pushl %edi
-; X32-NEXT: pushl %esi
-; X32-NEXT: subl $8, %esp
-; X32-NEXT: movl {{[0-9]+}}(%esp), %esi
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: movl 16(%ecx), %eax
-; X32-NEXT: movl 12(%ecx), %edx
-; X32-NEXT: movl 8(%ecx), %edi
-; X32-NEXT: movl (%ecx), %ebx
-; X32-NEXT: movl 4(%ecx), %ebp
-; X32-NEXT: subl (%esi), %ebx
-; X32-NEXT: sbbl 4(%esi), %ebp
-; X32-NEXT: sbbl 8(%esi), %edi
-; X32-NEXT: sbbl 12(%esi), %edx
-; X32-NEXT: movl %edx, {{[0-9]+}}(%esp) # 4-byte Spill
-; X32-NEXT: sbbl 16(%esi), %eax
-; X32-NEXT: movl %eax, (%esp) # 4-byte Spill
-; X32-NEXT: movl 20(%ecx), %edx
-; X32-NEXT: sbbl 20(%esi), %edx
-; X32-NEXT: movl 24(%ecx), %eax
-; X32-NEXT: sbbl 24(%esi), %eax
-; X32-NEXT: movl 28(%esi), %esi
-; X32-NEXT: sbbl %esi, 28(%ecx)
-; X32-NEXT: movl %edi, 8(%ecx)
-; X32-NEXT: movl %ebp, 4(%ecx)
-; X32-NEXT: movl %ebx, (%ecx)
-; X32-NEXT: movl {{[0-9]+}}(%esp), %esi # 4-byte Reload
-; X32-NEXT: movl %esi, 12(%ecx)
-; X32-NEXT: movl (%esp), %esi # 4-byte Reload
-; X32-NEXT: movl %esi, 16(%ecx)
-; X32-NEXT: movl %edx, 20(%ecx)
-; X32-NEXT: movl %eax, 24(%ecx)
-; X32-NEXT: addl $8, %esp
-; X32-NEXT: popl %esi
-; X32-NEXT: popl %edi
-; X32-NEXT: popl %ebx
-; X32-NEXT: popl %ebp
-; X32-NEXT: retl
-;
-; X64-LABEL: sub:
-; X64: # %bb.0:
-; X64-NEXT: movq 16(%rdi), %rax
-; X64-NEXT: movq (%rdi), %rcx
-; X64-NEXT: movq 8(%rdi), %rdx
-; X64-NEXT: movq 24(%rsi), %r8
-; X64-NEXT: subq (%rsi), %rcx
-; X64-NEXT: sbbq 8(%rsi), %rdx
-; X64-NEXT: sbbq 16(%rsi), %rax
-; X64-NEXT: sbbq %r8, 24(%rdi)
-; X64-NEXT: movq %rax, 16(%rdi)
-; X64-NEXT: movq %rdx, 8(%rdi)
-; X64-NEXT: movq %rcx, (%rdi)
-; X64-NEXT: retq
- %a = load i256, i256* %p
- %b = load i256, i256* %q
- %c = sub i256 %a, %b
- store i256 %c, i256* %p
- ret void
-}
diff --git a/test/CodeGen/X86/insertelement-shuffle.ll b/test/CodeGen/X86/insertelement-shuffle.ll
index 705ceba9487..c0177ad7a9a 100644
--- a/test/CodeGen/X86/insertelement-shuffle.ll
+++ b/test/CodeGen/X86/insertelement-shuffle.ll
@@ -103,14 +103,9 @@ define <8 x i64> @insert_subvector_into_undef(i32 %x0, i32 %x1) nounwind {
; X32_AVX256-NEXT: subl $8, %esp
; X32_AVX256-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
; X32_AVX256-NEXT: vmovlps %xmm0, (%esp)
-; X32_AVX256-NEXT: movl (%esp), %eax
-; X32_AVX256-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32_AVX256-NEXT: vmovd %eax, %xmm0
-; X32_AVX256-NEXT: vpinsrd $1, %ecx, %xmm0, %xmm0
-; X32_AVX256-NEXT: vpinsrd $2, %eax, %xmm0, %xmm0
-; X32_AVX256-NEXT: vpinsrd $3, %ecx, %xmm0, %xmm0
-; X32_AVX256-NEXT: vinserti128 $1, %xmm0, %ymm0, %ymm0
-; X32_AVX256-NEXT: vmovdqa %ymm0, %ymm1
+; X32_AVX256-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
+; X32_AVX256-NEXT: vbroadcastsd %xmm0, %ymm0
+; X32_AVX256-NEXT: vmovaps %ymm0, %ymm1
; X32_AVX256-NEXT: movl %ebp, %esp
; X32_AVX256-NEXT: popl %ebp
; X32_AVX256-NEXT: retl
diff --git a/test/CodeGen/X86/masked_memop.ll b/test/CodeGen/X86/masked_memop.ll
index 82f097e4e0f..33cb5e2f235 100644
--- a/test/CodeGen/X86/masked_memop.ll
+++ b/test/CodeGen/X86/masked_memop.ll
@@ -1199,8 +1199,7 @@ define <8 x double> @load_one_mask_bit_set5(<8 x double>* %addr, <8 x double> %v
; AVX-LABEL: load_one_mask_bit_set5:
; AVX: ## %bb.0:
; AVX-NEXT: vextractf128 $1, %ymm1, %xmm2
-; AVX-NEXT: vmovsd {{.*#+}} xmm3 = mem[0],zero
-; AVX-NEXT: vmovlhps {{.*#+}} xmm2 = xmm2[0],xmm3[0]
+; AVX-NEXT: vmovhpd {{.*#+}} xmm2 = xmm2[0],mem[0]
; AVX-NEXT: vinsertf128 $1, %xmm2, %ymm1, %ymm1
; AVX-NEXT: retq
;
diff --git a/test/CodeGen/X86/merge-consecutive-stores.ll b/test/CodeGen/X86/merge-consecutive-stores.ll
index af5fb478e52..4f511ef99e5 100644
--- a/test/CodeGen/X86/merge-consecutive-stores.ll
+++ b/test/CodeGen/X86/merge-consecutive-stores.ll
@@ -10,12 +10,11 @@ define i32 @foo (i64* %so) nounwind uwtable ssp {
; CHECK-NEXT: movl {{[0-9]+}}(%esp), %eax
; CHECK-NEXT: movl $0, 28(%eax)
; CHECK-NEXT: movl $0, 24(%eax)
-; CHECK-NEXT: movl 20(%eax), %ecx
-; CHECK-NEXT: movl $0, 20(%eax)
-; CHECK-NEXT: xorl %edx, %edx
-; CHECK-NEXT: cmpl 16(%eax), %edx
+; CHECK-NEXT: xorl %ecx, %ecx
+; CHECK-NEXT: cmpl 16(%eax), %ecx
; CHECK-NEXT: movl $0, 16(%eax)
-; CHECK-NEXT: sbbl %ecx, %edx
+; CHECK-NEXT: sbbl 20(%eax), %ecx
+; CHECK-NEXT: movl $0, 20(%eax)
; CHECK-NEXT: setl %al
; CHECK-NEXT: movzbl %al, %eax
; CHECK-NEXT: negl %eax
diff --git a/test/CodeGen/X86/nontemporal.ll b/test/CodeGen/X86/nontemporal.ll
index f53982a8542..472c3e4774c 100644
--- a/test/CodeGen/X86/nontemporal.ll
+++ b/test/CodeGen/X86/nontemporal.ll
@@ -13,36 +13,35 @@ define i32 @f(<4 x float> %A, i8* %B, <2 x double> %C, i32 %D, <2 x i64> %E, <4
; X32-SSE-NEXT: andl $-16, %esp
; X32-SSE-NEXT: subl $16, %esp
; X32-SSE-NEXT: movsd {{.*#+}} xmm3 = mem[0],zero
-; X32-SSE-NEXT: movl 12(%ebp), %eax
+; X32-SSE-NEXT: movl 12(%ebp), %ecx
; X32-SSE-NEXT: movdqa 56(%ebp), %xmm4
; X32-SSE-NEXT: movdqa 40(%ebp), %xmm5
; X32-SSE-NEXT: movdqa 24(%ebp), %xmm6
-; X32-SSE-NEXT: movl 8(%ebp), %edx
-; X32-SSE-NEXT: movl 80(%ebp), %ecx
-; X32-SSE-NEXT: movl (%ecx), %esi
+; X32-SSE-NEXT: movl 8(%ebp), %esi
+; X32-SSE-NEXT: movl 80(%ebp), %edx
+; X32-SSE-NEXT: movl (%edx), %eax
; X32-SSE-NEXT: addps {{\.LCPI.*}}, %xmm0
-; X32-SSE-NEXT: movntps %xmm0, (%edx)
+; X32-SSE-NEXT: movntps %xmm0, (%esi)
; X32-SSE-NEXT: paddq {{\.LCPI.*}}, %xmm2
-; X32-SSE-NEXT: addl (%ecx), %esi
-; X32-SSE-NEXT: movntdq %xmm2, (%edx)
+; X32-SSE-NEXT: addl (%edx), %eax
+; X32-SSE-NEXT: movntdq %xmm2, (%esi)
; X32-SSE-NEXT: addpd {{\.LCPI.*}}, %xmm1
-; X32-SSE-NEXT: addl (%ecx), %esi
-; X32-SSE-NEXT: movntpd %xmm1, (%edx)
+; X32-SSE-NEXT: addl (%edx), %eax
+; X32-SSE-NEXT: movntpd %xmm1, (%esi)
; X32-SSE-NEXT: paddd {{\.LCPI.*}}, %xmm6
-; X32-SSE-NEXT: addl (%ecx), %esi
-; X32-SSE-NEXT: movntdq %xmm6, (%edx)
+; X32-SSE-NEXT: addl (%edx), %eax
+; X32-SSE-NEXT: movntdq %xmm6, (%esi)
; X32-SSE-NEXT: paddw {{\.LCPI.*}}, %xmm5
-; X32-SSE-NEXT: addl (%ecx), %esi
-; X32-SSE-NEXT: movntdq %xmm5, (%edx)
+; X32-SSE-NEXT: addl (%edx), %eax
+; X32-SSE-NEXT: movntdq %xmm5, (%esi)
; X32-SSE-NEXT: paddb {{\.LCPI.*}}, %xmm4
-; X32-SSE-NEXT: addl (%ecx), %esi
-; X32-SSE-NEXT: movntdq %xmm4, (%edx)
-; X32-SSE-NEXT: addl (%ecx), %esi
-; X32-SSE-NEXT: movntil %eax, (%edx)
-; X32-SSE-NEXT: movl (%ecx), %eax
-; X32-SSE-NEXT: addl %esi, %eax
-; X32-SSE-NEXT: movsd %xmm3, (%edx)
-; X32-SSE-NEXT: addl (%ecx), %eax
+; X32-SSE-NEXT: addl (%edx), %eax
+; X32-SSE-NEXT: movntdq %xmm4, (%esi)
+; X32-SSE-NEXT: addl (%edx), %eax
+; X32-SSE-NEXT: movntil %ecx, (%esi)
+; X32-SSE-NEXT: addl (%edx), %eax
+; X32-SSE-NEXT: movsd %xmm3, (%esi)
+; X32-SSE-NEXT: addl (%edx), %eax
; X32-SSE-NEXT: leal -4(%ebp), %esp
; X32-SSE-NEXT: popl %esi
; X32-SSE-NEXT: popl %ebp
@@ -56,36 +55,35 @@ define i32 @f(<4 x float> %A, i8* %B, <2 x double> %C, i32 %D, <2 x i64> %E, <4
; X32-AVX-NEXT: andl $-16, %esp
; X32-AVX-NEXT: subl $16, %esp
; X32-AVX-NEXT: vmovsd {{.*#+}} xmm3 = mem[0],zero
-; X32-AVX-NEXT: movl 12(%ebp), %eax
+; X32-AVX-NEXT: movl 12(%ebp), %ecx
; X32-AVX-NEXT: vmovdqa 56(%ebp), %xmm4
; X32-AVX-NEXT: vmovdqa 40(%ebp), %xmm5
; X32-AVX-NEXT: vmovdqa 24(%ebp), %xmm6
-; X32-AVX-NEXT: movl 8(%ebp), %ecx
-; X32-AVX-NEXT: movl 80(%ebp), %edx
-; X32-AVX-NEXT: movl (%edx), %esi
+; X32-AVX-NEXT: movl 8(%ebp), %edx
+; X32-AVX-NEXT: movl 80(%ebp), %esi
+; X32-AVX-NEXT: movl (%esi), %eax
; X32-AVX-NEXT: vaddps {{\.LCPI.*}}, %xmm0, %xmm0
-; X32-AVX-NEXT: vmovntps %xmm0, (%ecx)
+; X32-AVX-NEXT: vmovntps %xmm0, (%edx)
; X32-AVX-NEXT: vpaddq {{\.LCPI.*}}, %xmm2, %xmm0
-; X32-AVX-NEXT: addl (%edx), %esi
-; X32-AVX-NEXT: vmovntdq %xmm0, (%ecx)
+; X32-AVX-NEXT: addl (%esi), %eax
+; X32-AVX-NEXT: vmovntdq %xmm0, (%edx)
; X32-AVX-NEXT: vaddpd {{\.LCPI.*}}, %xmm1, %xmm0
-; X32-AVX-NEXT: addl (%edx), %esi
-; X32-AVX-NEXT: vmovntpd %xmm0, (%ecx)
+; X32-AVX-NEXT: addl (%esi), %eax
+; X32-AVX-NEXT: vmovntpd %xmm0, (%edx)
; X32-AVX-NEXT: vpaddd {{\.LCPI.*}}, %xmm6, %xmm0
-; X32-AVX-NEXT: addl (%edx), %esi
-; X32-AVX-NEXT: vmovntdq %xmm0, (%ecx)
+; X32-AVX-NEXT: addl (%esi), %eax
+; X32-AVX-NEXT: vmovntdq %xmm0, (%edx)
; X32-AVX-NEXT: vpaddw {{\.LCPI.*}}, %xmm5, %xmm0
-; X32-AVX-NEXT: addl (%edx), %esi
-; X32-AVX-NEXT: vmovntdq %xmm0, (%ecx)
+; X32-AVX-NEXT: addl (%esi), %eax
+; X32-AVX-NEXT: vmovntdq %xmm0, (%edx)
; X32-AVX-NEXT: vpaddb {{\.LCPI.*}}, %xmm4, %xmm0
-; X32-AVX-NEXT: addl (%edx), %esi
-; X32-AVX-NEXT: vmovntdq %xmm0, (%ecx)
-; X32-AVX-NEXT: addl (%edx), %esi
-; X32-AVX-NEXT: movntil %eax, (%ecx)
-; X32-AVX-NEXT: movl (%edx), %eax
-; X32-AVX-NEXT: addl %esi, %eax
-; X32-AVX-NEXT: vmovsd %xmm3, (%ecx)
-; X32-AVX-NEXT: addl (%edx), %eax
+; X32-AVX-NEXT: addl (%esi), %eax
+; X32-AVX-NEXT: vmovntdq %xmm0, (%edx)
+; X32-AVX-NEXT: addl (%esi), %eax
+; X32-AVX-NEXT: movntil %ecx, (%edx)
+; X32-AVX-NEXT: addl (%esi), %eax
+; X32-AVX-NEXT: vmovsd %xmm3, (%edx)
+; X32-AVX-NEXT: addl (%esi), %eax
; X32-AVX-NEXT: leal -4(%ebp), %esp
; X32-AVX-NEXT: popl %esi
; X32-AVX-NEXT: popl %ebp
diff --git a/test/CodeGen/X86/pr36274.ll b/test/CodeGen/X86/pr36274.ll
new file mode 100644
index 00000000000..97b958c6b68
--- /dev/null
+++ b/test/CodeGen/X86/pr36274.ll
@@ -0,0 +1,33 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=i386-unknown-linux-gnu | FileCheck %s
+
+; This test checks for a case where the x86 load-op-store fusion
+; misses a dependence between the fused load and a non-fused operand
+; of the load, causing a cycle. Here the dependence in question comes
+; from the carry-in input of the adcl.
+
+@vx = external local_unnamed_addr global <2 x i32>, align 8
+
+define void @pr36274(i32* %somewhere) {
+; CHECK-LABEL: pr36274:
+; CHECK: # %bb.0:
+; CHECK-NEXT: movl vx+4, %eax
+; CHECK-NEXT: addl $1, vx
+; CHECK-NEXT: adcl $0, %eax
+; CHECK-NEXT: movl %eax, vx+4
+; CHECK-NEXT: retl
+ %a0 = getelementptr <2 x i32>, <2 x i32>* @vx, i32 0, i32 0
+ %a1 = getelementptr <2 x i32>, <2 x i32>* @vx, i32 0, i32 1
+ %x1 = load volatile i32, i32* %a1, align 4
+ %x0 = load volatile i32, i32* %a0, align 8
+ %vx0 = insertelement <2 x i32> undef, i32 %x0, i32 0
+ %vx1 = insertelement <2 x i32> %vx0, i32 %x1, i32 1
+ %x = bitcast <2 x i32> %vx1 to i64
+ %add = add i64 %x, 1
+ %vadd = bitcast i64 %add to <2 x i32>
+ %vx1_0 = extractelement <2 x i32> %vadd, i32 0
+ %vx1_1 = extractelement <2 x i32> %vadd, i32 1
+ store i32 %vx1_0, i32* %a0, align 8
+ store i32 %vx1_1, i32* %a1, align 4
+ ret void
+}
diff --git a/test/CodeGen/X86/pr36312.ll b/test/CodeGen/X86/pr36312.ll
new file mode 100644
index 00000000000..64048511ac7
--- /dev/null
+++ b/test/CodeGen/X86/pr36312.ll
@@ -0,0 +1,35 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=x86_64-unknown-linux-gnu | FileCheck %s
+
+%struct.anon = type { i32, i32 }
+
+@c = common global %struct.anon zeroinitializer, align 4
+@d = local_unnamed_addr global %struct.anon* @c, align 8
+@a = common local_unnamed_addr global i32 0, align 4
+@b = common local_unnamed_addr global i32 0, align 4
+
+; Function Attrs: norecurse nounwind uwtable
+define void @g() local_unnamed_addr #0 {
+; CHECK-LABEL: g:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: movq {{.*}}(%rip), %rax
+; CHECK-NEXT: movl 4(%rax), %eax
+; CHECK-NEXT: xorl %ecx, %ecx
+; CHECK-NEXT: incl {{.*}}(%rip)
+; CHECK-NEXT: setne %cl
+; CHECK-NEXT: addl %eax, %ecx
+; CHECK-NEXT: movl %ecx, {{.*}}(%rip)
+; CHECK-NEXT: retq
+entry:
+ %0 = load %struct.anon*, %struct.anon** @d, align 8
+ %y = getelementptr inbounds %struct.anon, %struct.anon* %0, i64 0, i32 1
+ %1 = load i32, i32* %y, align 4
+ %2 = load i32, i32* @b, align 4
+ %inc = add nsw i32 %2, 1
+ store i32 %inc, i32* @b, align 4
+ %tobool = icmp ne i32 %inc, 0
+ %land.ext = zext i1 %tobool to i32
+ %add = add nsw i32 %1, %land.ext
+ store i32 %add, i32* @a, align 4
+ ret void
+}
diff --git a/test/CodeGen/X86/store_op_load_fold2.ll b/test/CodeGen/X86/store_op_load_fold2.ll
index f47d87f4bb8..674b8d8f938 100644
--- a/test/CodeGen/X86/store_op_load_fold2.ll
+++ b/test/CodeGen/X86/store_op_load_fold2.ll
@@ -17,14 +17,14 @@ cond_true2732.preheader: ; preds = %entry
store i64 %tmp2676.us.us, i64* %tmp2666
ret i32 0
-; INTEL: and {{e..}}, dword ptr [360]
-; INTEL: and dword ptr [356], {{e..}}
-; FIXME: mov dword ptr [360], {{e..}}
+; INTEL: and {{e..}}, dword ptr [356]
+; INTEL: and dword ptr [360], {{e..}}
+; FIXME: mov dword ptr [356], {{e..}}
; The above line comes out as 'mov 360, eax', but when the register is ecx it works?
-; ATT: andl 360, %{{e..}}
-; ATT: andl %{{e..}}, 356
-; ATT: movl %{{e..}}, 360
+; ATT: andl 356, %{{e..}}
+; ATT: andl %{{e..}}, 360
+; ATT: movl %{{e..}}, 356
}
diff --git a/test/CodeGen/X86/subvector-broadcast.ll b/test/CodeGen/X86/subvector-broadcast.ll
deleted file mode 100644
index 33cf2f453ba..00000000000
--- a/test/CodeGen/X86/subvector-broadcast.ll
+++ /dev/null
@@ -1,1683 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx | FileCheck %s --check-prefix=X32 --check-prefix=X32-AVX --check-prefix=X32-AVX1
-; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx2 | FileCheck %s --check-prefix=X32 --check-prefix=X32-AVX --check-prefix=X32-AVX2
-; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx512vl | FileCheck %s --check-prefix=X32 --check-prefix=X32-AVX512 --check-prefix=X32-AVX512F
-; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx512bw,+avx512vl | FileCheck %s --check-prefix=X32 --check-prefix=X32-AVX512 --check-prefix=X32-AVX512BW
-; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx512dq,+avx512vl | FileCheck %s --check-prefix=X32 --check-prefix=X32-AVX512 --check-prefix=X32-AVX512DQ
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx | FileCheck %s --check-prefix=X64 --check-prefix=X64-AVX --check-prefix=X64-AVX1
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx2 | FileCheck %s --check-prefix=X64 --check-prefix=X64-AVX --check-prefix=X64-AVX2
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx512vl | FileCheck %s --check-prefix=X64 --check-prefix=X64-AVX512 --check-prefix=X64-AVX512F
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx512bw,+avx512vl | FileCheck %s --check-prefix=X64 --check-prefix=X64-AVX512 --check-prefix=X64-AVX512BW
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx512dq,+avx512vl | FileCheck %s --check-prefix=X64 --check-prefix=X64-AVX512 --check-prefix=X64-AVX512DQ
-
-;
-; Subvector Load + Broadcast
-;
-
-define <4 x double> @test_broadcast_2f64_4f64(<2 x double> *%p) nounwind {
-; X32-LABEL: test_broadcast_2f64_4f64:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-NEXT: retl
-;
-; X64-LABEL: test_broadcast_2f64_4f64:
-; X64: # %bb.0:
-; X64-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-NEXT: retq
- %1 = load <2 x double>, <2 x double> *%p
- %2 = shufflevector <2 x double> %1, <2 x double> undef, <4 x i32> <i32 0, i32 1, i32 0, i32 1>
- ret <4 x double> %2
-}
-
-define <8 x double> @test_broadcast_2f64_8f64(<2 x double> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_2f64_8f64:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_2f64_8f64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcastf32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_2f64_8f64:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_2f64_8f64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcastf32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512-NEXT: retq
- %1 = load <2 x double>, <2 x double> *%p
- %2 = shufflevector <2 x double> %1, <2 x double> undef, <8 x i32> <i32 0, i32 1, i32 0, i32 1, i32 0, i32 1, i32 0, i32 1>
- ret <8 x double> %2
-}
-
-define <8 x double> @test_broadcast_4f64_8f64(<4 x double> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_4f64_8f64:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_4f64_8f64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcastf64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_4f64_8f64:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_4f64_8f64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcastf64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X64-AVX512-NEXT: retq
- %1 = load <4 x double>, <4 x double> *%p
- %2 = shufflevector <4 x double> %1, <4 x double> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x double> %2
-}
-
-define <4 x i64> @test_broadcast_2i64_4i64(<2 x i64> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_2i64_4i64:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_2i64_4i64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_2i64_4i64:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_2i64_4i64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX512-NEXT: retq
- %1 = load <2 x i64>, <2 x i64> *%p
- %2 = shufflevector <2 x i64> %1, <2 x i64> undef, <4 x i32> <i32 0, i32 1, i32 0, i32 1>
- ret <4 x i64> %2
-}
-
-define <8 x i64> @test_broadcast_2i64_8i64(<2 x i64> *%p) nounwind {
-; X32-AVX1-LABEL: test_broadcast_2i64_8i64:
-; X32-AVX1: # %bb.0:
-; X32-AVX1-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX1-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX1-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX1-NEXT: retl
-;
-; X32-AVX2-LABEL: test_broadcast_2i64_8i64:
-; X32-AVX2: # %bb.0:
-; X32-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX2-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX2-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX2-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_2i64_8i64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcasti32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX1-LABEL: test_broadcast_2i64_8i64:
-; X64-AVX1: # %bb.0:
-; X64-AVX1-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX1-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX1-NEXT: retq
-;
-; X64-AVX2-LABEL: test_broadcast_2i64_8i64:
-; X64-AVX2: # %bb.0:
-; X64-AVX2-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX2-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_2i64_8i64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcasti32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512-NEXT: retq
- %1 = load <2 x i64>, <2 x i64> *%p
- %2 = shufflevector <2 x i64> %1, <2 x i64> undef, <8 x i32> <i32 0, i32 1, i32 0, i32 1, i32 0, i32 1, i32 0, i32 1>
- ret <8 x i64> %2
-}
-
-define <8 x i64> @test_broadcast_4i64_8i64(<4 x i64> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_4i64_8i64:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_4i64_8i64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcasti64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_4i64_8i64:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_4i64_8i64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcasti64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X64-AVX512-NEXT: retq
- %1 = load <4 x i64>, <4 x i64> *%p
- %2 = shufflevector <4 x i64> %1, <4 x i64> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x i64> %2
-}
-
-define <8 x float> @test_broadcast_4f32_8f32(<4 x float> *%p) nounwind {
-; X32-LABEL: test_broadcast_4f32_8f32:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-NEXT: retl
-;
-; X64-LABEL: test_broadcast_4f32_8f32:
-; X64: # %bb.0:
-; X64-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-NEXT: retq
- %1 = load <4 x float>, <4 x float> *%p
- %2 = shufflevector <4 x float> %1, <4 x float> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x float> %2
-}
-
-define <16 x float> @test_broadcast_4f32_16f32(<4 x float> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_4f32_16f32:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_4f32_16f32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcastf32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_4f32_16f32:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_4f32_16f32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcastf32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512-NEXT: retq
- %1 = load <4 x float>, <4 x float> *%p
- %2 = shufflevector <4 x float> %1, <4 x float> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <16 x float> %2
-}
-
-define <16 x float> @test_broadcast_8f32_16f32(<8 x float> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_8f32_16f32:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_8f32_16f32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcastf64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_8f32_16f32:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_8f32_16f32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcastf64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X64-AVX512-NEXT: retq
- %1 = load <8 x float>, <8 x float> *%p
- %2 = shufflevector <8 x float> %1, <8 x float> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <16 x float> %2
-}
-
-define <8 x i32> @test_broadcast_4i32_8i32(<4 x i32> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_4i32_8i32:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_4i32_8i32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_4i32_8i32:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_4i32_8i32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX512-NEXT: retq
- %1 = load <4 x i32>, <4 x i32> *%p
- %2 = shufflevector <4 x i32> %1, <4 x i32> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x i32> %2
-}
-
-define <16 x i32> @test_broadcast_4i32_16i32(<4 x i32> *%p) nounwind {
-; X32-AVX1-LABEL: test_broadcast_4i32_16i32:
-; X32-AVX1: # %bb.0:
-; X32-AVX1-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX1-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX1-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX1-NEXT: retl
-;
-; X32-AVX2-LABEL: test_broadcast_4i32_16i32:
-; X32-AVX2: # %bb.0:
-; X32-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX2-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX2-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX2-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_4i32_16i32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcasti32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX1-LABEL: test_broadcast_4i32_16i32:
-; X64-AVX1: # %bb.0:
-; X64-AVX1-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX1-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX1-NEXT: retq
-;
-; X64-AVX2-LABEL: test_broadcast_4i32_16i32:
-; X64-AVX2: # %bb.0:
-; X64-AVX2-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX2-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_4i32_16i32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcasti32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512-NEXT: retq
- %1 = load <4 x i32>, <4 x i32> *%p
- %2 = shufflevector <4 x i32> %1, <4 x i32> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <16 x i32> %2
-}
-
-define <16 x i32> @test_broadcast_8i32_16i32(<8 x i32> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_8i32_16i32:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_8i32_16i32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcasti64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_8i32_16i32:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_8i32_16i32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcasti64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X64-AVX512-NEXT: retq
- %1 = load <8 x i32>, <8 x i32> *%p
- %2 = shufflevector <8 x i32> %1, <8 x i32> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <16 x i32> %2
-}
-
-define <16 x i16> @test_broadcast_8i16_16i16(<8 x i16> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_8i16_16i16:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_8i16_16i16:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_8i16_16i16:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_8i16_16i16:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX512-NEXT: retq
- %1 = load <8 x i16>, <8 x i16> *%p
- %2 = shufflevector <8 x i16> %1, <8 x i16> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <16 x i16> %2
-}
-
-define <32 x i16> @test_broadcast_8i16_32i16(<8 x i16> *%p) nounwind {
-; X32-AVX1-LABEL: test_broadcast_8i16_32i16:
-; X32-AVX1: # %bb.0:
-; X32-AVX1-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX1-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX1-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX1-NEXT: retl
-;
-; X32-AVX2-LABEL: test_broadcast_8i16_32i16:
-; X32-AVX2: # %bb.0:
-; X32-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX2-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX2-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX2-NEXT: retl
-;
-; X32-AVX512F-LABEL: test_broadcast_8i16_32i16:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512F-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX512F-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: test_broadcast_8i16_32i16:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512BW-NEXT: vbroadcasti32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: test_broadcast_8i16_32i16:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512DQ-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX512DQ-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX1-LABEL: test_broadcast_8i16_32i16:
-; X64-AVX1: # %bb.0:
-; X64-AVX1-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX1-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX1-NEXT: retq
-;
-; X64-AVX2-LABEL: test_broadcast_8i16_32i16:
-; X64-AVX2: # %bb.0:
-; X64-AVX2-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX2-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512F-LABEL: test_broadcast_8i16_32i16:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX512F-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: test_broadcast_8i16_32i16:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: vbroadcasti32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: test_broadcast_8i16_32i16:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX512DQ-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX512DQ-NEXT: retq
- %1 = load <8 x i16>, <8 x i16> *%p
- %2 = shufflevector <8 x i16> %1, <8 x i16> undef, <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <32 x i16> %2
-}
-
-define <32 x i16> @test_broadcast_16i16_32i16(<16 x i16> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_16i16_32i16:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512F-LABEL: test_broadcast_16i16_32i16:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512F-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: test_broadcast_16i16_32i16:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512BW-NEXT: vbroadcasti64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: test_broadcast_16i16_32i16:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512DQ-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_16i16_32i16:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512F-LABEL: test_broadcast_16i16_32i16:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: test_broadcast_16i16_32i16:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: vbroadcasti64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: test_broadcast_16i16_32i16:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512DQ-NEXT: retq
- %1 = load <16 x i16>, <16 x i16> *%p
- %2 = shufflevector <16 x i16> %1, <16 x i16> undef, <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
- ret <32 x i16> %2
-}
-
-define <32 x i8> @test_broadcast_16i8_32i8(<16 x i8> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_16i8_32i8:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: test_broadcast_16i8_32i8:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_16i8_32i8:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: test_broadcast_16i8_32i8:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX512-NEXT: retq
- %1 = load <16 x i8>, <16 x i8> *%p
- %2 = shufflevector <16 x i8> %1, <16 x i8> undef, <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
- ret <32 x i8> %2
-}
-
-define <64 x i8> @test_broadcast_16i8_64i8(<16 x i8> *%p) nounwind {
-; X32-AVX1-LABEL: test_broadcast_16i8_64i8:
-; X32-AVX1: # %bb.0:
-; X32-AVX1-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX1-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX1-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX1-NEXT: retl
-;
-; X32-AVX2-LABEL: test_broadcast_16i8_64i8:
-; X32-AVX2: # %bb.0:
-; X32-AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX2-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX2-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX2-NEXT: retl
-;
-; X32-AVX512F-LABEL: test_broadcast_16i8_64i8:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512F-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX512F-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: test_broadcast_16i8_64i8:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512BW-NEXT: vbroadcasti32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: test_broadcast_16i8_64i8:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512DQ-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X32-AVX512DQ-NEXT: vmovdqa %ymm0, %ymm1
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX1-LABEL: test_broadcast_16i8_64i8:
-; X64-AVX1: # %bb.0:
-; X64-AVX1-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX1-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX1-NEXT: retq
-;
-; X64-AVX2-LABEL: test_broadcast_16i8_64i8:
-; X64-AVX2: # %bb.0:
-; X64-AVX2-NEXT: vbroadcastf128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX2-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512F-LABEL: test_broadcast_16i8_64i8:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX512F-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: test_broadcast_16i8_64i8:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: vbroadcasti32x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: test_broadcast_16i8_64i8:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: vbroadcasti128 {{.*#+}} ymm0 = mem[0,1,0,1]
-; X64-AVX512DQ-NEXT: vmovdqa %ymm0, %ymm1
-; X64-AVX512DQ-NEXT: retq
- %1 = load <16 x i8>, <16 x i8> *%p
- %2 = shufflevector <16 x i8> %1, <16 x i8> undef, <64 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
- ret <64 x i8> %2
-}
-
-define <64 x i8> @test_broadcast_32i8_64i8(<32 x i8> *%p) nounwind {
-; X32-AVX-LABEL: test_broadcast_32i8_64i8:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512F-LABEL: test_broadcast_32i8_64i8:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512F-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: test_broadcast_32i8_64i8:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512BW-NEXT: vbroadcasti64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: test_broadcast_32i8_64i8:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512DQ-NEXT: vmovaps (%eax), %ymm0
-; X32-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_32i8_64i8:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512F-LABEL: test_broadcast_32i8_64i8:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: test_broadcast_32i8_64i8:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: vbroadcasti64x4 {{.*#+}} zmm0 = mem[0,1,2,3,0,1,2,3]
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: test_broadcast_32i8_64i8:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: vmovaps (%rdi), %ymm0
-; X64-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512DQ-NEXT: retq
- %1 = load <32 x i8>, <32 x i8> *%p
- %2 = shufflevector <32 x i8> %1, <32 x i8> undef, <64 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
- ret <64 x i8> %2
-}
-
-;
-; Subvector Load + Broadcast + Store
-;
-
-define <4 x double> @test_broadcast_2f64_4f64_reuse(<2 x double>* %p0, <2 x double>* %p1) {
-; X32-LABEL: test_broadcast_2f64_4f64_reuse:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: vmovaps (%ecx), %xmm0
-; X32-NEXT: vmovaps %xmm0, (%eax)
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: test_broadcast_2f64_4f64_reuse:
-; X64: # %bb.0:
-; X64-NEXT: vmovaps (%rdi), %xmm0
-; X64-NEXT: vmovaps %xmm0, (%rsi)
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = load <2 x double>, <2 x double>* %p0
- store <2 x double> %1, <2 x double>* %p1
- %2 = shufflevector <2 x double> %1, <2 x double> undef, <4 x i32> <i32 0, i32 1, i32 0, i32 1>
- ret <4 x double> %2
-}
-
-define <4 x i64> @test_broadcast_2i64_4i64_reuse(<2 x i64>* %p0, <2 x i64>* %p1) {
-; X32-LABEL: test_broadcast_2i64_4i64_reuse:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: vmovaps (%ecx), %xmm0
-; X32-NEXT: vmovaps %xmm0, (%eax)
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: test_broadcast_2i64_4i64_reuse:
-; X64: # %bb.0:
-; X64-NEXT: vmovaps (%rdi), %xmm0
-; X64-NEXT: vmovaps %xmm0, (%rsi)
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = load <2 x i64>, <2 x i64>* %p0
- store <2 x i64> %1, <2 x i64>* %p1
- %2 = shufflevector <2 x i64> %1, <2 x i64> undef, <4 x i32> <i32 0, i32 1, i32 0, i32 1>
- ret <4 x i64> %2
-}
-
-define <8 x float> @test_broadcast_4f32_8f32_reuse(<4 x float>* %p0, <4 x float>* %p1) {
-; X32-LABEL: test_broadcast_4f32_8f32_reuse:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: vmovaps (%ecx), %xmm0
-; X32-NEXT: vmovaps %xmm0, (%eax)
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: test_broadcast_4f32_8f32_reuse:
-; X64: # %bb.0:
-; X64-NEXT: vmovaps (%rdi), %xmm0
-; X64-NEXT: vmovaps %xmm0, (%rsi)
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = load <4 x float>, <4 x float>* %p0
- store <4 x float> %1, <4 x float>* %p1
- %2 = shufflevector <4 x float> %1, <4 x float> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x float> %2
-}
-
-define <8 x i32> @test_broadcast_4i32_8i32_reuse(<4 x i32>* %p0, <4 x i32>* %p1) {
-; X32-LABEL: test_broadcast_4i32_8i32_reuse:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: vmovaps (%ecx), %xmm0
-; X32-NEXT: vmovaps %xmm0, (%eax)
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: test_broadcast_4i32_8i32_reuse:
-; X64: # %bb.0:
-; X64-NEXT: vmovaps (%rdi), %xmm0
-; X64-NEXT: vmovaps %xmm0, (%rsi)
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = load <4 x i32>, <4 x i32>* %p0
- store <4 x i32> %1, <4 x i32>* %p1
- %2 = shufflevector <4 x i32> %1, <4 x i32> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x i32> %2
-}
-
-define <16 x i16> @test_broadcast_8i16_16i16_reuse(<8 x i16> *%p0, <8 x i16> *%p1) nounwind {
-; X32-LABEL: test_broadcast_8i16_16i16_reuse:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: vmovaps (%ecx), %xmm0
-; X32-NEXT: vmovaps %xmm0, (%eax)
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: test_broadcast_8i16_16i16_reuse:
-; X64: # %bb.0:
-; X64-NEXT: vmovaps (%rdi), %xmm0
-; X64-NEXT: vmovaps %xmm0, (%rsi)
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = load <8 x i16>, <8 x i16> *%p0
- store <8 x i16> %1, <8 x i16>* %p1
- %2 = shufflevector <8 x i16> %1, <8 x i16> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <16 x i16> %2
-}
-
-define <32 x i8> @test_broadcast_16i8_32i8_reuse(<16 x i8> *%p0, <16 x i8> *%p1) nounwind {
-; X32-LABEL: test_broadcast_16i8_32i8_reuse:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-NEXT: vmovaps (%ecx), %xmm0
-; X32-NEXT: vmovaps %xmm0, (%eax)
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: test_broadcast_16i8_32i8_reuse:
-; X64: # %bb.0:
-; X64-NEXT: vmovaps (%rdi), %xmm0
-; X64-NEXT: vmovaps %xmm0, (%rsi)
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = load <16 x i8>, <16 x i8> *%p0
- store <16 x i8> %1, <16 x i8>* %p1
- %2 = shufflevector <16 x i8> %1, <16 x i8> undef, <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
- ret <32 x i8> %2
-}
-
-;
-; Subvector Load + Broadcast with Separate Store
-;
-
-define <8 x i32> @test_broadcast_4i32_8i32_chain(<4 x i32>* %p0, <4 x float>* %p1) {
-; X32-AVX-LABEL: test_broadcast_4i32_8i32_chain:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-AVX-NEXT: vmovaps (%ecx), %xmm0
-; X32-AVX-NEXT: vxorps %xmm1, %xmm1, %xmm1
-; X32-AVX-NEXT: vmovaps %xmm1, (%eax)
-; X32-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512F-LABEL: test_broadcast_4i32_8i32_chain:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512F-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-AVX512F-NEXT: vmovaps (%ecx), %xmm0
-; X32-AVX512F-NEXT: vpxor %xmm1, %xmm1, %xmm1
-; X32-AVX512F-NEXT: vmovdqa %xmm1, (%eax)
-; X32-AVX512F-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: test_broadcast_4i32_8i32_chain:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512BW-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-AVX512BW-NEXT: vmovaps (%ecx), %xmm0
-; X32-AVX512BW-NEXT: vpxor %xmm1, %xmm1, %xmm1
-; X32-AVX512BW-NEXT: vmovdqa %xmm1, (%eax)
-; X32-AVX512BW-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: test_broadcast_4i32_8i32_chain:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512DQ-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-AVX512DQ-NEXT: vmovaps (%ecx), %xmm0
-; X32-AVX512DQ-NEXT: vxorps %xmm1, %xmm1, %xmm1
-; X32-AVX512DQ-NEXT: vmovaps %xmm1, (%eax)
-; X32-AVX512DQ-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_4i32_8i32_chain:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps (%rdi), %xmm0
-; X64-AVX-NEXT: vxorps %xmm1, %xmm1, %xmm1
-; X64-AVX-NEXT: vmovaps %xmm1, (%rsi)
-; X64-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512F-LABEL: test_broadcast_4i32_8i32_chain:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: vmovaps (%rdi), %xmm0
-; X64-AVX512F-NEXT: vpxor %xmm1, %xmm1, %xmm1
-; X64-AVX512F-NEXT: vmovdqa %xmm1, (%rsi)
-; X64-AVX512F-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: test_broadcast_4i32_8i32_chain:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: vmovaps (%rdi), %xmm0
-; X64-AVX512BW-NEXT: vpxor %xmm1, %xmm1, %xmm1
-; X64-AVX512BW-NEXT: vmovdqa %xmm1, (%rsi)
-; X64-AVX512BW-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: test_broadcast_4i32_8i32_chain:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: vmovaps (%rdi), %xmm0
-; X64-AVX512DQ-NEXT: vxorps %xmm1, %xmm1, %xmm1
-; X64-AVX512DQ-NEXT: vmovaps %xmm1, (%rsi)
-; X64-AVX512DQ-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512DQ-NEXT: retq
- %1 = load <4 x i32>, <4 x i32>* %p0
- store <4 x float> zeroinitializer, <4 x float>* %p1
- %2 = shufflevector <4 x i32> %1, <4 x i32> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x i32> %2
-}
-
-define <16 x i32> @test_broadcast_4i32_16i32_chain(<4 x i32>* %p0, <4 x float>* %p1) {
-; X32-AVX-LABEL: test_broadcast_4i32_16i32_chain:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-AVX-NEXT: vmovaps (%ecx), %xmm0
-; X32-AVX-NEXT: vxorps %xmm1, %xmm1, %xmm1
-; X32-AVX-NEXT: vmovaps %xmm1, (%eax)
-; X32-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512F-LABEL: test_broadcast_4i32_16i32_chain:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512F-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-AVX512F-NEXT: vmovdqa (%ecx), %xmm0
-; X32-AVX512F-NEXT: vpxor %xmm1, %xmm1, %xmm1
-; X32-AVX512F-NEXT: vmovdqa %xmm1, (%eax)
-; X32-AVX512F-NEXT: vshufi32x4 {{.*#+}} zmm0 = zmm0[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: test_broadcast_4i32_16i32_chain:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512BW-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-AVX512BW-NEXT: vmovdqa (%ecx), %xmm0
-; X32-AVX512BW-NEXT: vpxor %xmm1, %xmm1, %xmm1
-; X32-AVX512BW-NEXT: vmovdqa %xmm1, (%eax)
-; X32-AVX512BW-NEXT: vshufi32x4 {{.*#+}} zmm0 = zmm0[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: test_broadcast_4i32_16i32_chain:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX512DQ-NEXT: movl {{[0-9]+}}(%esp), %ecx
-; X32-AVX512DQ-NEXT: vmovdqa (%ecx), %xmm0
-; X32-AVX512DQ-NEXT: vxorps %xmm1, %xmm1, %xmm1
-; X32-AVX512DQ-NEXT: vmovaps %xmm1, (%eax)
-; X32-AVX512DQ-NEXT: vshufi32x4 {{.*#+}} zmm0 = zmm0[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX-LABEL: test_broadcast_4i32_16i32_chain:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps (%rdi), %xmm0
-; X64-AVX-NEXT: vxorps %xmm1, %xmm1, %xmm1
-; X64-AVX-NEXT: vmovaps %xmm1, (%rsi)
-; X64-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512F-LABEL: test_broadcast_4i32_16i32_chain:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: vmovdqa (%rdi), %xmm0
-; X64-AVX512F-NEXT: vpxor %xmm1, %xmm1, %xmm1
-; X64-AVX512F-NEXT: vmovdqa %xmm1, (%rsi)
-; X64-AVX512F-NEXT: vshufi32x4 {{.*#+}} zmm0 = zmm0[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: test_broadcast_4i32_16i32_chain:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: vmovdqa (%rdi), %xmm0
-; X64-AVX512BW-NEXT: vpxor %xmm1, %xmm1, %xmm1
-; X64-AVX512BW-NEXT: vmovdqa %xmm1, (%rsi)
-; X64-AVX512BW-NEXT: vshufi32x4 {{.*#+}} zmm0 = zmm0[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: test_broadcast_4i32_16i32_chain:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: vmovdqa (%rdi), %xmm0
-; X64-AVX512DQ-NEXT: vxorps %xmm1, %xmm1, %xmm1
-; X64-AVX512DQ-NEXT: vmovaps %xmm1, (%rsi)
-; X64-AVX512DQ-NEXT: vshufi32x4 {{.*#+}} zmm0 = zmm0[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]
-; X64-AVX512DQ-NEXT: retq
- %1 = load <4 x i32>, <4 x i32>* %p0
- store <4 x float> zeroinitializer, <4 x float>* %p1
- %2 = shufflevector <4 x i32> %1, <4 x i32> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <16 x i32> %2
-}
-
-;
-; subvector Load with multiple uses + broadcast
-; Fallback to the broadcast should be done
-;
-
-@ga4 = global <4 x i64> zeroinitializer, align 8
-@gb4 = global <8 x i64> zeroinitializer, align 8
-
-define void @fallback_broadcast_v4i64_to_v8i64(<4 x i64> %a, <8 x i64> %b) {
-; X32-AVX1-LABEL: fallback_broadcast_v4i64_to_v8i64:
-; X32-AVX1: # %bb.0: # %entry
-; X32-AVX1-NEXT: vextractf128 $1, %ymm0, %xmm3
-; X32-AVX1-NEXT: vmovdqa {{.*#+}} ymm4 = [1,0,2,0,3,0,4,0]
-; X32-AVX1-NEXT: vextractf128 $1, %ymm4, %xmm5
-; X32-AVX1-NEXT: vpaddq %xmm5, %xmm3, %xmm3
-; X32-AVX1-NEXT: vpaddq %xmm4, %xmm0, %xmm0
-; X32-AVX1-NEXT: vinsertf128 $1, %xmm3, %ymm0, %ymm0
-; X32-AVX1-NEXT: vextractf128 $1, %ymm2, %xmm3
-; X32-AVX1-NEXT: vpaddq %xmm5, %xmm3, %xmm3
-; X32-AVX1-NEXT: vpaddq %xmm4, %xmm2, %xmm2
-; X32-AVX1-NEXT: vinsertf128 $1, %xmm3, %ymm2, %ymm2
-; X32-AVX1-NEXT: vextractf128 $1, %ymm1, %xmm3
-; X32-AVX1-NEXT: vpaddq %xmm5, %xmm3, %xmm3
-; X32-AVX1-NEXT: vpaddq %xmm4, %xmm1, %xmm1
-; X32-AVX1-NEXT: vinsertf128 $1, %xmm3, %ymm1, %ymm1
-; X32-AVX1-NEXT: vandps %ymm4, %ymm1, %ymm1
-; X32-AVX1-NEXT: vandps %ymm4, %ymm2, %ymm2
-; X32-AVX1-NEXT: vmovups %ymm0, ga4
-; X32-AVX1-NEXT: vmovups %ymm2, gb4+32
-; X32-AVX1-NEXT: vmovups %ymm1, gb4
-; X32-AVX1-NEXT: vzeroupper
-; X32-AVX1-NEXT: retl
-;
-; X32-AVX2-LABEL: fallback_broadcast_v4i64_to_v8i64:
-; X32-AVX2: # %bb.0: # %entry
-; X32-AVX2-NEXT: vmovdqa {{.*#+}} ymm3 = [1,0,2,0,3,0,4,0]
-; X32-AVX2-NEXT: vpaddq %ymm3, %ymm0, %ymm0
-; X32-AVX2-NEXT: vpaddq %ymm3, %ymm2, %ymm2
-; X32-AVX2-NEXT: vpaddq %ymm3, %ymm1, %ymm1
-; X32-AVX2-NEXT: vpand %ymm3, %ymm1, %ymm1
-; X32-AVX2-NEXT: vpand %ymm3, %ymm2, %ymm2
-; X32-AVX2-NEXT: vmovdqu %ymm0, ga4
-; X32-AVX2-NEXT: vmovdqu %ymm2, gb4+32
-; X32-AVX2-NEXT: vmovdqu %ymm1, gb4
-; X32-AVX2-NEXT: vzeroupper
-; X32-AVX2-NEXT: retl
-;
-; X32-AVX512-LABEL: fallback_broadcast_v4i64_to_v8i64:
-; X32-AVX512: # %bb.0: # %entry
-; X32-AVX512-NEXT: vpaddq {{\.LCPI.*}}, %ymm0, %ymm0
-; X32-AVX512-NEXT: vmovdqa64 {{.*#+}} zmm2 = [1,0,2,0,3,0,4,0,1,0,2,0,3,0,4,0]
-; X32-AVX512-NEXT: vpaddq %zmm2, %zmm1, %zmm1
-; X32-AVX512-NEXT: vpandq %zmm2, %zmm1, %zmm1
-; X32-AVX512-NEXT: vmovdqu %ymm0, ga4
-; X32-AVX512-NEXT: vmovdqu64 %zmm1, gb4
-; X32-AVX512-NEXT: vzeroupper
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX1-LABEL: fallback_broadcast_v4i64_to_v8i64:
-; X64-AVX1: # %bb.0: # %entry
-; X64-AVX1-NEXT: vextractf128 $1, %ymm0, %xmm3
-; X64-AVX1-NEXT: vmovdqa {{.*#+}} xmm4 = [3,4]
-; X64-AVX1-NEXT: vpaddq %xmm4, %xmm3, %xmm3
-; X64-AVX1-NEXT: vmovdqa {{.*#+}} xmm5 = [1,2]
-; X64-AVX1-NEXT: vpaddq %xmm5, %xmm0, %xmm0
-; X64-AVX1-NEXT: vinsertf128 $1, %xmm3, %ymm0, %ymm0
-; X64-AVX1-NEXT: vmovaps {{.*#+}} ymm3 = [1,2,3,4]
-; X64-AVX1-NEXT: vextractf128 $1, %ymm2, %xmm6
-; X64-AVX1-NEXT: vpaddq %xmm4, %xmm6, %xmm6
-; X64-AVX1-NEXT: vpaddq %xmm5, %xmm2, %xmm2
-; X64-AVX1-NEXT: vinsertf128 $1, %xmm6, %ymm2, %ymm2
-; X64-AVX1-NEXT: vextractf128 $1, %ymm1, %xmm6
-; X64-AVX1-NEXT: vpaddq %xmm4, %xmm6, %xmm4
-; X64-AVX1-NEXT: vpaddq %xmm5, %xmm1, %xmm1
-; X64-AVX1-NEXT: vinsertf128 $1, %xmm4, %ymm1, %ymm1
-; X64-AVX1-NEXT: vandps %ymm3, %ymm1, %ymm1
-; X64-AVX1-NEXT: vandps %ymm3, %ymm2, %ymm2
-; X64-AVX1-NEXT: vmovups %ymm0, {{.*}}(%rip)
-; X64-AVX1-NEXT: vmovups %ymm2, gb4+{{.*}}(%rip)
-; X64-AVX1-NEXT: vmovups %ymm1, {{.*}}(%rip)
-; X64-AVX1-NEXT: vzeroupper
-; X64-AVX1-NEXT: retq
-;
-; X64-AVX2-LABEL: fallback_broadcast_v4i64_to_v8i64:
-; X64-AVX2: # %bb.0: # %entry
-; X64-AVX2-NEXT: vmovdqa {{.*#+}} ymm3 = [1,2,3,4]
-; X64-AVX2-NEXT: vpaddq %ymm3, %ymm0, %ymm0
-; X64-AVX2-NEXT: vpaddq %ymm3, %ymm2, %ymm2
-; X64-AVX2-NEXT: vpaddq %ymm3, %ymm1, %ymm1
-; X64-AVX2-NEXT: vpand %ymm3, %ymm1, %ymm1
-; X64-AVX2-NEXT: vpand %ymm3, %ymm2, %ymm2
-; X64-AVX2-NEXT: vmovdqu %ymm0, {{.*}}(%rip)
-; X64-AVX2-NEXT: vmovdqu %ymm2, gb4+{{.*}}(%rip)
-; X64-AVX2-NEXT: vmovdqu %ymm1, {{.*}}(%rip)
-; X64-AVX2-NEXT: vzeroupper
-; X64-AVX2-NEXT: retq
-;
-; X64-AVX512-LABEL: fallback_broadcast_v4i64_to_v8i64:
-; X64-AVX512: # %bb.0: # %entry
-; X64-AVX512-NEXT: vmovdqa {{.*#+}} ymm2 = [1,2,3,4]
-; X64-AVX512-NEXT: vpaddq %ymm2, %ymm0, %ymm0
-; X64-AVX512-NEXT: vinserti64x4 $1, %ymm2, %zmm2, %zmm2
-; X64-AVX512-NEXT: vpaddq %zmm2, %zmm1, %zmm1
-; X64-AVX512-NEXT: vpandq %zmm2, %zmm1, %zmm1
-; X64-AVX512-NEXT: vmovdqu %ymm0, {{.*}}(%rip)
-; X64-AVX512-NEXT: vmovdqu64 %zmm1, {{.*}}(%rip)
-; X64-AVX512-NEXT: vzeroupper
-; X64-AVX512-NEXT: retq
-entry:
- %0 = add <4 x i64> %a, <i64 1, i64 2, i64 3, i64 4>
- %1 = add <8 x i64> %b, <i64 1, i64 2, i64 3, i64 4, i64 1, i64 2, i64 3, i64 4>
- %2 = and <8 x i64> %1, <i64 1, i64 2, i64 3, i64 4, i64 1, i64 2, i64 3, i64 4>
- store <4 x i64> %0, <4 x i64>* @ga4, align 8
- store <8 x i64> %2, <8 x i64>* @gb4, align 8
- ret void
-}
-
-
-@ga2 = global <4 x double> zeroinitializer, align 8
-@gb2 = global <8 x double> zeroinitializer, align 8
-
-define void @fallback_broadcast_v4f64_to_v8f64(<4 x double> %a, <8 x double> %b) {
-; X32-AVX-LABEL: fallback_broadcast_v4f64_to_v8f64:
-; X32-AVX: # %bb.0: # %entry
-; X32-AVX-NEXT: vmovapd {{.*#+}} ymm3 = [1.000000e+00,2.000000e+00,3.000000e+00,4.000000e+00]
-; X32-AVX-NEXT: vaddpd %ymm3, %ymm0, %ymm0
-; X32-AVX-NEXT: vaddpd %ymm3, %ymm2, %ymm2
-; X32-AVX-NEXT: vaddpd %ymm3, %ymm1, %ymm1
-; X32-AVX-NEXT: vdivpd %ymm3, %ymm1, %ymm1
-; X32-AVX-NEXT: vdivpd %ymm3, %ymm2, %ymm2
-; X32-AVX-NEXT: vmovupd %ymm0, ga2
-; X32-AVX-NEXT: vmovupd %ymm2, gb2+32
-; X32-AVX-NEXT: vmovupd %ymm1, gb2
-; X32-AVX-NEXT: vzeroupper
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: fallback_broadcast_v4f64_to_v8f64:
-; X32-AVX512: # %bb.0: # %entry
-; X32-AVX512-NEXT: vmovapd {{.*#+}} ymm2 = [1.000000e+00,2.000000e+00,3.000000e+00,4.000000e+00]
-; X32-AVX512-NEXT: vaddpd %ymm2, %ymm0, %ymm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm2, %zmm2, %zmm2
-; X32-AVX512-NEXT: vaddpd %zmm2, %zmm1, %zmm1
-; X32-AVX512-NEXT: vdivpd %zmm2, %zmm1, %zmm1
-; X32-AVX512-NEXT: vmovupd %ymm0, ga2
-; X32-AVX512-NEXT: vmovupd %zmm1, gb2
-; X32-AVX512-NEXT: vzeroupper
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: fallback_broadcast_v4f64_to_v8f64:
-; X64-AVX: # %bb.0: # %entry
-; X64-AVX-NEXT: vmovapd {{.*#+}} ymm3 = [1.000000e+00,2.000000e+00,3.000000e+00,4.000000e+00]
-; X64-AVX-NEXT: vaddpd %ymm3, %ymm0, %ymm0
-; X64-AVX-NEXT: vaddpd %ymm3, %ymm2, %ymm2
-; X64-AVX-NEXT: vaddpd %ymm3, %ymm1, %ymm1
-; X64-AVX-NEXT: vdivpd %ymm3, %ymm1, %ymm1
-; X64-AVX-NEXT: vdivpd %ymm3, %ymm2, %ymm2
-; X64-AVX-NEXT: vmovupd %ymm0, {{.*}}(%rip)
-; X64-AVX-NEXT: vmovupd %ymm2, gb2+{{.*}}(%rip)
-; X64-AVX-NEXT: vmovupd %ymm1, {{.*}}(%rip)
-; X64-AVX-NEXT: vzeroupper
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: fallback_broadcast_v4f64_to_v8f64:
-; X64-AVX512: # %bb.0: # %entry
-; X64-AVX512-NEXT: vmovapd {{.*#+}} ymm2 = [1.000000e+00,2.000000e+00,3.000000e+00,4.000000e+00]
-; X64-AVX512-NEXT: vaddpd %ymm2, %ymm0, %ymm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm2, %zmm2, %zmm2
-; X64-AVX512-NEXT: vaddpd %zmm2, %zmm1, %zmm1
-; X64-AVX512-NEXT: vdivpd %zmm2, %zmm1, %zmm1
-; X64-AVX512-NEXT: vmovupd %ymm0, {{.*}}(%rip)
-; X64-AVX512-NEXT: vmovupd %zmm1, {{.*}}(%rip)
-; X64-AVX512-NEXT: vzeroupper
-; X64-AVX512-NEXT: retq
-entry:
- %0 = fadd <4 x double> %a, <double 1.0, double 2.0, double 3.0, double 4.0>
- %1 = fadd <8 x double> %b, <double 1.0, double 2.0, double 3.0, double 4.0, double 1.0, double 2.0, double 3.0, double 4.0>
- %2 = fdiv <8 x double> %1, <double 1.0, double 2.0, double 3.0, double 4.0, double 1.0, double 2.0, double 3.0, double 4.0>
- store <4 x double> %0, <4 x double>* @ga2, align 8
- store <8 x double> %2, <8 x double>* @gb2, align 8
- ret void
-}
-
-;
-; Subvector Broadcast from register
-;
-
-define <4 x double> @reg_broadcast_2f64_4f64(<2 x double> %a0) nounwind {
-; X32-LABEL: reg_broadcast_2f64_4f64:
-; X32: # %bb.0:
-; X32-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: reg_broadcast_2f64_4f64:
-; X64: # %bb.0:
-; X64-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = shufflevector <2 x double> %a0, <2 x double> undef, <4 x i32> <i32 0, i32 1, i32 0, i32 1>
- ret <4 x double> %1
-}
-
-define <8 x double> @reg_broadcast_2f64_8f64(<2 x double> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_2f64_8f64:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: reg_broadcast_2f64_8f64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_2f64_8f64:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: reg_broadcast_2f64_8f64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512-NEXT: retq
- %1 = shufflevector <2 x double> %a0, <2 x double> undef, <8 x i32> <i32 0, i32 1, i32 0, i32 1, i32 0, i32 1, i32 0, i32 1>
- ret <8 x double> %1
-}
-
-define <8 x double> @reg_broadcast_4f64_8f64(<4 x double> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_4f64_8f64:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: reg_broadcast_4f64_8f64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_4f64_8f64:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: reg_broadcast_4f64_8f64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512-NEXT: retq
- %1 = shufflevector <4 x double> %a0, <4 x double> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x double> %1
-}
-
-define <4 x i64> @reg_broadcast_2i64_4i64(<2 x i64> %a0) nounwind {
-; X32-LABEL: reg_broadcast_2i64_4i64:
-; X32: # %bb.0:
-; X32-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: reg_broadcast_2i64_4i64:
-; X64: # %bb.0:
-; X64-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = shufflevector <2 x i64> %a0, <2 x i64> undef, <4 x i32> <i32 0, i32 1, i32 0, i32 1>
- ret <4 x i64> %1
-}
-
-define <8 x i64> @reg_broadcast_2i64_8i64(<2 x i64> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_2i64_8i64:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: reg_broadcast_2i64_8i64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_2i64_8i64:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: reg_broadcast_2i64_8i64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512-NEXT: retq
- %1 = shufflevector <2 x i64> %a0, <2 x i64> undef, <8 x i32> <i32 0, i32 1, i32 0, i32 1, i32 0, i32 1, i32 0, i32 1>
- ret <8 x i64> %1
-}
-
-define <8 x i64> @reg_broadcast_4i64_8i64(<4 x i64> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_4i64_8i64:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: reg_broadcast_4i64_8i64:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_4i64_8i64:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: reg_broadcast_4i64_8i64:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512-NEXT: retq
- %1 = shufflevector <4 x i64> %a0, <4 x i64> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x i64> %1
-}
-
-define <8 x float> @reg_broadcast_4f32_8f32(<4 x float> %a0) nounwind {
-; X32-LABEL: reg_broadcast_4f32_8f32:
-; X32: # %bb.0:
-; X32-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: reg_broadcast_4f32_8f32:
-; X64: # %bb.0:
-; X64-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = shufflevector <4 x float> %a0, <4 x float> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x float> %1
-}
-
-define <16 x float> @reg_broadcast_4f32_16f32(<4 x float> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_4f32_16f32:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: reg_broadcast_4f32_16f32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_4f32_16f32:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: reg_broadcast_4f32_16f32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512-NEXT: retq
- %1 = shufflevector <4 x float> %a0, <4 x float> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <16 x float> %1
-}
-
-define <16 x float> @reg_broadcast_8f32_16f32(<8 x float> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_8f32_16f32:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: reg_broadcast_8f32_16f32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_8f32_16f32:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: reg_broadcast_8f32_16f32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512-NEXT: retq
- %1 = shufflevector <8 x float> %a0, <8 x float> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <16 x float> %1
-}
-
-define <8 x i32> @reg_broadcast_4i32_8i32(<4 x i32> %a0) nounwind {
-; X32-LABEL: reg_broadcast_4i32_8i32:
-; X32: # %bb.0:
-; X32-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: reg_broadcast_4i32_8i32:
-; X64: # %bb.0:
-; X64-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = shufflevector <4 x i32> %a0, <4 x i32> undef, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <8 x i32> %1
-}
-
-define <16 x i32> @reg_broadcast_4i32_16i32(<4 x i32> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_4i32_16i32:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: reg_broadcast_4i32_16i32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_4i32_16i32:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: reg_broadcast_4i32_16i32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512-NEXT: retq
- %1 = shufflevector <4 x i32> %a0, <4 x i32> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
- ret <16 x i32> %1
-}
-
-define <16 x i32> @reg_broadcast_8i32_16i32(<8 x i32> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_8i32_16i32:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512-LABEL: reg_broadcast_8i32_16i32:
-; X32-AVX512: # %bb.0:
-; X32-AVX512-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X32-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_8i32_16i32:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512-LABEL: reg_broadcast_8i32_16i32:
-; X64-AVX512: # %bb.0:
-; X64-AVX512-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X64-AVX512-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512-NEXT: retq
- %1 = shufflevector <8 x i32> %a0, <8 x i32> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <16 x i32> %1
-}
-
-define <16 x i16> @reg_broadcast_8i16_16i16(<8 x i16> %a0) nounwind {
-; X32-LABEL: reg_broadcast_8i16_16i16:
-; X32: # %bb.0:
-; X32-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: reg_broadcast_8i16_16i16:
-; X64: # %bb.0:
-; X64-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = shufflevector <8 x i16> %a0, <8 x i16> undef, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <16 x i16> %1
-}
-
-define <32 x i16> @reg_broadcast_8i16_32i16(<8 x i16> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_8i16_32i16:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512F-LABEL: reg_broadcast_8i16_32i16:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512F-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: reg_broadcast_8i16_32i16:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512BW-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512BW-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: reg_broadcast_8i16_32i16:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512DQ-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_8i16_32i16:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512F-LABEL: reg_broadcast_8i16_32i16:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512F-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: reg_broadcast_8i16_32i16:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512BW-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512BW-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: reg_broadcast_8i16_32i16:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512DQ-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512DQ-NEXT: retq
- %1 = shufflevector <8 x i16> %a0, <8 x i16> undef, <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7>
- ret <32 x i16> %1
-}
-
-define <32 x i16> @reg_broadcast_16i16_32i16(<16 x i16> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_16i16_32i16:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512F-LABEL: reg_broadcast_16i16_32i16:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: reg_broadcast_16i16_32i16:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X32-AVX512BW-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: reg_broadcast_16i16_32i16:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_16i16_32i16:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512F-LABEL: reg_broadcast_16i16_32i16:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: reg_broadcast_16i16_32i16:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X64-AVX512BW-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: reg_broadcast_16i16_32i16:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512DQ-NEXT: retq
- %1 = shufflevector <16 x i16> %a0, <16 x i16> undef, <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
- ret <32 x i16> %1
-}
-
-define <32 x i8> @reg_broadcast_16i8_32i8(<16 x i8> %a0) nounwind {
-; X32-LABEL: reg_broadcast_16i8_32i8:
-; X32: # %bb.0:
-; X32-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-NEXT: retl
-;
-; X64-LABEL: reg_broadcast_16i8_32i8:
-; X64: # %bb.0:
-; X64-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-NEXT: retq
- %1 = shufflevector <16 x i8> %a0, <16 x i8> undef, <32 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
- ret <32 x i8> %1
-}
-
-define <64 x i8> @reg_broadcast_16i8_64i8(<16 x i8> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_16i8_64i8:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512F-LABEL: reg_broadcast_16i8_64i8:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512F-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: reg_broadcast_16i8_64i8:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512BW-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512BW-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: reg_broadcast_16i8_64i8:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X32-AVX512DQ-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X32-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_16i8_64i8:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512F-LABEL: reg_broadcast_16i8_64i8:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512F-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: reg_broadcast_16i8_64i8:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512BW-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512BW-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: reg_broadcast_16i8_64i8:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; X64-AVX512DQ-NEXT: vinsertf128 $1, %xmm0, %ymm0, %ymm0
-; X64-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512DQ-NEXT: retq
- %1 = shufflevector <16 x i8> %a0, <16 x i8> undef, <64 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15>
- ret <64 x i8> %1
-}
-
-define <64 x i8> @reg_broadcast_32i8_64i8(<32 x i8> %a0) nounwind {
-; X32-AVX-LABEL: reg_broadcast_32i8_64i8:
-; X32-AVX: # %bb.0:
-; X32-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX-NEXT: retl
-;
-; X32-AVX512F-LABEL: reg_broadcast_32i8_64i8:
-; X32-AVX512F: # %bb.0:
-; X32-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512F-NEXT: retl
-;
-; X32-AVX512BW-LABEL: reg_broadcast_32i8_64i8:
-; X32-AVX512BW: # %bb.0:
-; X32-AVX512BW-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X32-AVX512BW-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X32-AVX512BW-NEXT: retl
-;
-; X32-AVX512DQ-LABEL: reg_broadcast_32i8_64i8:
-; X32-AVX512DQ: # %bb.0:
-; X32-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X32-AVX512DQ-NEXT: retl
-;
-; X64-AVX-LABEL: reg_broadcast_32i8_64i8:
-; X64-AVX: # %bb.0:
-; X64-AVX-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX-NEXT: retq
-;
-; X64-AVX512F-LABEL: reg_broadcast_32i8_64i8:
-; X64-AVX512F: # %bb.0:
-; X64-AVX512F-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512F-NEXT: retq
-;
-; X64-AVX512BW-LABEL: reg_broadcast_32i8_64i8:
-; X64-AVX512BW: # %bb.0:
-; X64-AVX512BW-NEXT: # kill: def %ymm0 killed %ymm0 def %zmm0
-; X64-AVX512BW-NEXT: vinsertf64x4 $1, %ymm0, %zmm0, %zmm0
-; X64-AVX512BW-NEXT: retq
-;
-; X64-AVX512DQ-LABEL: reg_broadcast_32i8_64i8:
-; X64-AVX512DQ: # %bb.0:
-; X64-AVX512DQ-NEXT: vmovaps %ymm0, %ymm1
-; X64-AVX512DQ-NEXT: retq
- %1 = shufflevector <32 x i8> %a0, <32 x i8> undef, <64 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15, i32 16, i32 17, i32 18, i32 19, i32 20, i32 21, i32 22, i32 23, i32 24, i32 25, i32 26, i32 27, i32 28, i32 29, i32 30, i32 31>
- ret <64 x i8> %1
-}
diff --git a/test/CodeGen/X86/test-shrink-bug.ll b/test/CodeGen/X86/test-shrink-bug.ll
index 814e07f718b..a79bb0a8c21 100644
--- a/test/CodeGen/X86/test-shrink-bug.ll
+++ b/test/CodeGen/X86/test-shrink-bug.ll
@@ -1,18 +1,39 @@
-; RUN: llc < %s | FileCheck %s
-
-; Codegen shouldn't reduce the comparison down to testb $-1, %al
-; because that changes the result of the signed test.
-; PR5132
-; CHECK: testl $255, %eax
-
-target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:64-f32:32:32-f64:32:64-v64:64:64-v128:128:128-a0:0:64-f80:128:128"
-target triple = "i386-apple-darwin10.0"
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=i386-apple-darwin10.0 | FileCheck %s --check-prefix=CHECK-X86
+; RUN: llc < %s -mtriple=x86_64-grtev4-linux-gnu | FileCheck %s --check-prefix=CHECK-X64
@g_14 = global i8 -6, align 1 ; <i8*> [#uses=1]
declare i32 @func_16(i8 signext %p_19, i32 %p_20) nounwind
define i32 @func_35(i64 %p_38) nounwind ssp {
+; CHECK-X86-LABEL: func_35:
+; CHECK-X86: ## %bb.0: ## %entry
+; CHECK-X86-NEXT: subl $12, %esp
+; CHECK-X86-NEXT: movsbl _g_14, %eax
+; CHECK-X86-NEXT: xorl %ecx, %ecx
+; CHECK-X86-NEXT: testl $255, %eax
+; CHECK-X86-NEXT: setg %cl
+; CHECK-X86-NEXT: subl $8, %esp
+; CHECK-X86-NEXT: pushl %ecx
+; CHECK-X86-NEXT: pushl %eax
+; CHECK-X86-NEXT: calll _func_16
+; CHECK-X86-NEXT: addl $16, %esp
+; CHECK-X86-NEXT: movl $1, %eax
+; CHECK-X86-NEXT: addl $12, %esp
+; CHECK-X86-NEXT: retl
+;
+; CHECK-X64-LABEL: func_35:
+; CHECK-X64: # %bb.0: # %entry
+; CHECK-X64-NEXT: pushq %rax
+; CHECK-X64-NEXT: movsbl {{.*}}(%rip), %edi
+; CHECK-X64-NEXT: xorl %esi, %esi
+; CHECK-X64-NEXT: testl $255, %edi
+; CHECK-X64-NEXT: setg %sil
+; CHECK-X64-NEXT: callq func_16
+; CHECK-X64-NEXT: movl $1, %eax
+; CHECK-X64-NEXT: popq %rcx
+; CHECK-X64-NEXT: retq
entry:
%tmp = load i8, i8* @g_14 ; <i8> [#uses=2]
%conv = zext i8 %tmp to i32 ; <i32> [#uses=1]
@@ -21,3 +42,62 @@ entry:
%call = call i32 @func_16(i8 signext %tmp, i32 %conv2) ssp ; <i32> [#uses=1]
ret i32 1
}
+
+define void @fail(i16 %a, <2 x i8> %b) {
+; CHECK-X86-LABEL: fail:
+; CHECK-X86: ## %bb.0:
+; CHECK-X86-NEXT: subl $12, %esp
+; CHECK-X86-NEXT: .cfi_def_cfa_offset 16
+; CHECK-X86-NEXT: movzwl {{[0-9]+}}(%esp), %ecx
+; CHECK-X86-NEXT: cmpb $123, {{[0-9]+}}(%esp)
+; CHECK-X86-NEXT: sete %al
+; CHECK-X86-NEXT: testl $263, %ecx ## imm = 0x107
+; CHECK-X86-NEXT: je LBB1_2
+; CHECK-X86-NEXT: ## %bb.1:
+; CHECK-X86-NEXT: testb %al, %al
+; CHECK-X86-NEXT: jne LBB1_2
+; CHECK-X86-NEXT: ## %bb.3: ## %no
+; CHECK-X86-NEXT: calll _bar
+; CHECK-X86-NEXT: addl $12, %esp
+; CHECK-X86-NEXT: retl
+; CHECK-X86-NEXT: LBB1_2: ## %yes
+; CHECK-X86-NEXT: addl $12, %esp
+; CHECK-X86-NEXT: retl
+;
+; CHECK-X64-LABEL: fail:
+; CHECK-X64: # %bb.0:
+; CHECK-X64-NEXT: pushq %rax
+; CHECK-X64-NEXT: .cfi_def_cfa_offset 16
+; CHECK-X64-NEXT: andw $263, %di # imm = 0x107
+; CHECK-X64-NEXT: je .LBB1_2
+; CHECK-X64-NEXT: # %bb.1:
+; CHECK-X64-NEXT: pand {{.*}}(%rip), %xmm0
+; CHECK-X64-NEXT: pcmpeqd {{.*}}(%rip), %xmm0
+; CHECK-X64-NEXT: pshufd {{.*#+}} xmm1 = xmm0[1,0,3,2]
+; CHECK-X64-NEXT: pand %xmm0, %xmm1
+; CHECK-X64-NEXT: pextrw $4, %xmm1, %eax
+; CHECK-X64-NEXT: testb $1, %al
+; CHECK-X64-NEXT: jne .LBB1_2
+; CHECK-X64-NEXT: # %bb.3: # %no
+; CHECK-X64-NEXT: callq bar
+; CHECK-X64-NEXT: popq %rax
+; CHECK-X64-NEXT: retq
+; CHECK-X64-NEXT: .LBB1_2: # %yes
+; CHECK-X64-NEXT: popq %rax
+; CHECK-X64-NEXT: retq
+ %1 = icmp eq <2 x i8> %b, <i8 40, i8 123>
+ %2 = extractelement <2 x i1> %1, i32 1
+ %3 = and i16 %a, 263
+ %4 = icmp eq i16 %3, 0
+ %merge = or i1 %4, %2
+ br i1 %merge, label %yes, label %no
+
+yes: ; preds = %0
+ ret void
+
+no: ; preds = %0
+ call void @bar()
+ ret void
+}
+
+declare void @bar()
diff --git a/test/CodeGen/X86/test-shrink.ll b/test/CodeGen/X86/test-shrink.ll
index 9e59f9a2faa..0cc7849e8e4 100644
--- a/test/CodeGen/X86/test-shrink.ll
+++ b/test/CodeGen/X86/test-shrink.ll
@@ -481,4 +481,94 @@ no:
ret void
}
+define void @truncand32(i16 inreg %x) nounwind {
+; CHECK-LINUX64-LABEL: truncand32:
+; CHECK-LINUX64: # %bb.0:
+; CHECK-LINUX64-NEXT: testl $2049, %edi # imm = 0x801
+; CHECK-LINUX64-NEXT: je .LBB11_1
+; CHECK-LINUX64-NEXT: # %bb.2: # %no
+; CHECK-LINUX64-NEXT: retq
+; CHECK-LINUX64-NEXT: .LBB11_1: # %yes
+; CHECK-LINUX64-NEXT: pushq %rax
+; CHECK-LINUX64-NEXT: callq bar
+; CHECK-LINUX64-NEXT: popq %rax
+; CHECK-LINUX64-NEXT: retq
+;
+; CHECK-WIN32-64-LABEL: truncand32:
+; CHECK-WIN32-64: # %bb.0:
+; CHECK-WIN32-64-NEXT: subq $40, %rsp
+; CHECK-WIN32-64-NEXT: testl $2049, %ecx # imm = 0x801
+; CHECK-WIN32-64-NEXT: je .LBB11_1
+; CHECK-WIN32-64-NEXT: # %bb.2: # %no
+; CHECK-WIN32-64-NEXT: addq $40, %rsp
+; CHECK-WIN32-64-NEXT: retq
+; CHECK-WIN32-64-NEXT: .LBB11_1: # %yes
+; CHECK-WIN32-64-NEXT: callq bar
+; CHECK-WIN32-64-NEXT: addq $40, %rsp
+; CHECK-WIN32-64-NEXT: retq
+;
+; CHECK-X86-LABEL: truncand32:
+; CHECK-X86: # %bb.0:
+; CHECK-X86-NEXT: testl $2049, %eax # imm = 0x801
+; CHECK-X86-NEXT: je .LBB11_1
+; CHECK-X86-NEXT: # %bb.2: # %no
+; CHECK-X86-NEXT: retl
+; CHECK-X86-NEXT: .LBB11_1: # %yes
+; CHECK-X86-NEXT: calll bar
+; CHECK-X86-NEXT: retl
+ %t = and i16 %x, 2049
+ %s = icmp eq i16 %t, 0
+ br i1 %s, label %yes, label %no
+
+yes:
+ call void @bar()
+ ret void
+no:
+ ret void
+}
+
+define void @testw(i16 inreg %x) nounwind minsize {
+; CHECK-LINUX64-LABEL: testw:
+; CHECK-LINUX64: # %bb.0:
+; CHECK-LINUX64-NEXT: testw $2049, %di # imm = 0x801
+; CHECK-LINUX64-NEXT: je .LBB12_1
+; CHECK-LINUX64-NEXT: # %bb.2: # %no
+; CHECK-LINUX64-NEXT: retq
+; CHECK-LINUX64-NEXT: .LBB12_1: # %yes
+; CHECK-LINUX64-NEXT: pushq %rax
+; CHECK-LINUX64-NEXT: callq bar
+; CHECK-LINUX64-NEXT: popq %rax
+; CHECK-LINUX64-NEXT: retq
+;
+; CHECK-WIN32-64-LABEL: testw:
+; CHECK-WIN32-64: # %bb.0:
+; CHECK-WIN32-64-NEXT: subq $40, %rsp
+; CHECK-WIN32-64-NEXT: testw $2049, %cx # imm = 0x801
+; CHECK-WIN32-64-NEXT: jne .LBB12_2
+; CHECK-WIN32-64-NEXT: # %bb.1: # %yes
+; CHECK-WIN32-64-NEXT: callq bar
+; CHECK-WIN32-64-NEXT: .LBB12_2: # %no
+; CHECK-WIN32-64-NEXT: addq $40, %rsp
+; CHECK-WIN32-64-NEXT: retq
+;
+; CHECK-X86-LABEL: testw:
+; CHECK-X86: # %bb.0:
+; CHECK-X86-NEXT: testw $2049, %ax # imm = 0x801
+; CHECK-X86-NEXT: je .LBB12_1
+; CHECK-X86-NEXT: # %bb.2: # %no
+; CHECK-X86-NEXT: retl
+; CHECK-X86-NEXT: .LBB12_1: # %yes
+; CHECK-X86-NEXT: calll bar
+; CHECK-X86-NEXT: retl
+ %t = and i16 %x, 2049
+ %s = icmp eq i16 %t, 0
+ br i1 %s, label %yes, label %no
+
+yes:
+ call void @bar()
+ ret void
+no:
+ ret void
+}
+
declare void @bar()
diff --git a/test/CodeGen/X86/testb-je-fusion.ll b/test/CodeGen/X86/testb-je-fusion.ll
index c085a422295..47453ca6791 100644
--- a/test/CodeGen/X86/testb-je-fusion.ll
+++ b/test/CodeGen/X86/testb-je-fusion.ll
@@ -1,11 +1,18 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc < %s -mtriple=x86_64-- -mcpu=corei7-avx | FileCheck %s

; testb should be scheduled right before je to enable macro-fusion.

-; CHECK: testb $2, %{{[abcd]}}h
-; CHECK-NEXT: je
-
define i32 @check_flag(i32 %flags, ...) nounwind {
+; CHECK-LABEL: check_flag:
+; CHECK: # %bb.0: # %entry
+; CHECK-NEXT: xorl %eax, %eax
+; CHECK-NEXT: testl $512, %edi # imm = 0x200
+; CHECK-NEXT: je .LBB0_2
+; CHECK-NEXT: # %bb.1: # %if.then
+; CHECK-NEXT: movl $1, %eax
+; CHECK-NEXT: .LBB0_2: # %if.end
+; CHECK-NEXT: retq
entry:
%and = and i32 %flags, 512
%tobool = icmp eq i32 %and, 0
diff --git a/test/CodeGen/X86/var-permute-256.ll b/test/CodeGen/X86/var-permute-256.ll
deleted file mode 100644
index b624fb08719..00000000000
--- a/test/CodeGen/X86/var-permute-256.ll
+++ /dev/null
@@ -1,1459 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx | FileCheck %s --check-prefixes=AVX,AVXNOVLBW,AVX1
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx2 | FileCheck %s --check-prefixes=AVX,AVXNOVLBW,INT256,AVX2
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx512f | FileCheck %s --check-prefixes=AVX,AVXNOVLBW,INT256,AVX512,AVX512F
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx512f,+avx512vl | FileCheck %s --check-prefixes=AVX,AVXNOVLBW,INT256,AVX512,AVX512VL
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx512bw,+avx512vl | FileCheck %s --check-prefixes=AVX,INT256,AVX512,AVX512VLBW
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx512bw,+avx512vl,+avx512vbmi | FileCheck %s --check-prefixes=AVX,INT256,AVX512,AVX512VLBW,VBMI
-
-define <4 x i64> @var_shuffle_v4i64(<4 x i64> %v, <4 x i64> %indices) nounwind {
-; AVX1-LABEL: var_shuffle_v4i64:
-; AVX1: # %bb.0:
-; AVX1-NEXT: pushq %rbp
-; AVX1-NEXT: movq %rsp, %rbp
-; AVX1-NEXT: andq $-32, %rsp
-; AVX1-NEXT: subq $64, %rsp
-; AVX1-NEXT: vmovq %xmm1, %rax
-; AVX1-NEXT: andl $3, %eax
-; AVX1-NEXT: vpextrq $1, %xmm1, %rcx
-; AVX1-NEXT: andl $3, %ecx
-; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm1
-; AVX1-NEXT: vmovq %xmm1, %rdx
-; AVX1-NEXT: andl $3, %edx
-; AVX1-NEXT: vpextrq $1, %xmm1, %rsi
-; AVX1-NEXT: andl $3, %esi
-; AVX1-NEXT: vmovaps %ymm0, (%rsp)
-; AVX1-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
-; AVX1-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX1-NEXT: vmovlhps {{.*#+}} xmm0 = xmm1[0],xmm0[0]
-; AVX1-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX1-NEXT: vmovsd {{.*#+}} xmm2 = mem[0],zero
-; AVX1-NEXT: vmovlhps {{.*#+}} xmm1 = xmm2[0],xmm1[0]
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX1-NEXT: movq %rbp, %rsp
-; AVX1-NEXT: popq %rbp
-; AVX1-NEXT: retq
-;
-; AVX2-LABEL: var_shuffle_v4i64:
-; AVX2: # %bb.0:
-; AVX2-NEXT: pushq %rbp
-; AVX2-NEXT: movq %rsp, %rbp
-; AVX2-NEXT: andq $-32, %rsp
-; AVX2-NEXT: subq $64, %rsp
-; AVX2-NEXT: vmovq %xmm1, %rax
-; AVX2-NEXT: andl $3, %eax
-; AVX2-NEXT: vpextrq $1, %xmm1, %rcx
-; AVX2-NEXT: andl $3, %ecx
-; AVX2-NEXT: vextracti128 $1, %ymm1, %xmm1
-; AVX2-NEXT: vmovq %xmm1, %rdx
-; AVX2-NEXT: andl $3, %edx
-; AVX2-NEXT: vpextrq $1, %xmm1, %rsi
-; AVX2-NEXT: andl $3, %esi
-; AVX2-NEXT: vmovaps %ymm0, (%rsp)
-; AVX2-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
-; AVX2-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX2-NEXT: vmovlhps {{.*#+}} xmm0 = xmm1[0],xmm0[0]
-; AVX2-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX2-NEXT: vmovsd {{.*#+}} xmm2 = mem[0],zero
-; AVX2-NEXT: vmovlhps {{.*#+}} xmm1 = xmm2[0],xmm1[0]
-; AVX2-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX2-NEXT: movq %rbp, %rsp
-; AVX2-NEXT: popq %rbp
-; AVX2-NEXT: retq
-;
-; AVX512F-LABEL: var_shuffle_v4i64:
-; AVX512F: # %bb.0:
-; AVX512F-NEXT: pushq %rbp
-; AVX512F-NEXT: movq %rsp, %rbp
-; AVX512F-NEXT: andq $-32, %rsp
-; AVX512F-NEXT: subq $64, %rsp
-; AVX512F-NEXT: vmovq %xmm1, %rax
-; AVX512F-NEXT: andl $3, %eax
-; AVX512F-NEXT: vpextrq $1, %xmm1, %rcx
-; AVX512F-NEXT: andl $3, %ecx
-; AVX512F-NEXT: vextracti128 $1, %ymm1, %xmm1
-; AVX512F-NEXT: vmovq %xmm1, %rdx
-; AVX512F-NEXT: andl $3, %edx
-; AVX512F-NEXT: vpextrq $1, %xmm1, %rsi
-; AVX512F-NEXT: andl $3, %esi
-; AVX512F-NEXT: vmovaps %ymm0, (%rsp)
-; AVX512F-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
-; AVX512F-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX512F-NEXT: vmovlhps {{.*#+}} xmm0 = xmm1[0],xmm0[0]
-; AVX512F-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX512F-NEXT: vmovsd {{.*#+}} xmm2 = mem[0],zero
-; AVX512F-NEXT: vmovlhps {{.*#+}} xmm1 = xmm2[0],xmm1[0]
-; AVX512F-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX512F-NEXT: movq %rbp, %rsp
-; AVX512F-NEXT: popq %rbp
-; AVX512F-NEXT: retq
-;
-; AVX512VL-LABEL: var_shuffle_v4i64:
-; AVX512VL: # %bb.0:
-; AVX512VL-NEXT: vpermpd %ymm0, %ymm1, %ymm0
-; AVX512VL-NEXT: retq
-;
-; AVX512VLBW-LABEL: var_shuffle_v4i64:
-; AVX512VLBW: # %bb.0:
-; AVX512VLBW-NEXT: vpermpd %ymm0, %ymm1, %ymm0
-; AVX512VLBW-NEXT: retq
- %index0 = extractelement <4 x i64> %indices, i32 0
- %index1 = extractelement <4 x i64> %indices, i32 1
- %index2 = extractelement <4 x i64> %indices, i32 2
- %index3 = extractelement <4 x i64> %indices, i32 3
- %v0 = extractelement <4 x i64> %v, i64 %index0
- %v1 = extractelement <4 x i64> %v, i64 %index1
- %v2 = extractelement <4 x i64> %v, i64 %index2
- %v3 = extractelement <4 x i64> %v, i64 %index3
- %ret0 = insertelement <4 x i64> undef, i64 %v0, i32 0
- %ret1 = insertelement <4 x i64> %ret0, i64 %v1, i32 1
- %ret2 = insertelement <4 x i64> %ret1, i64 %v2, i32 2
- %ret3 = insertelement <4 x i64> %ret2, i64 %v3, i32 3
- ret <4 x i64> %ret3
-}
-
-define <8 x i32> @var_shuffle_v8i32(<8 x i32> %v, <8 x i32> %indices) nounwind {
-; AVX1-LABEL: var_shuffle_v8i32:
-; AVX1: # %bb.0:
-; AVX1-NEXT: pushq %rbp
-; AVX1-NEXT: movq %rsp, %rbp
-; AVX1-NEXT: andq $-32, %rsp
-; AVX1-NEXT: subq $64, %rsp
-; AVX1-NEXT: vpextrq $1, %xmm1, %r8
-; AVX1-NEXT: movq %r8, %rcx
-; AVX1-NEXT: shrq $30, %rcx
-; AVX1-NEXT: vmovq %xmm1, %r9
-; AVX1-NEXT: movq %r9, %rsi
-; AVX1-NEXT: shrq $30, %rsi
-; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm1
-; AVX1-NEXT: vpextrq $1, %xmm1, %r10
-; AVX1-NEXT: movq %r10, %rdi
-; AVX1-NEXT: shrq $30, %rdi
-; AVX1-NEXT: vmovq %xmm1, %rax
-; AVX1-NEXT: movq %rax, %rdx
-; AVX1-NEXT: shrq $30, %rdx
-; AVX1-NEXT: vmovaps %ymm0, (%rsp)
-; AVX1-NEXT: andl $7, %r9d
-; AVX1-NEXT: andl $28, %esi
-; AVX1-NEXT: andl $7, %r8d
-; AVX1-NEXT: andl $28, %ecx
-; AVX1-NEXT: andl $7, %eax
-; AVX1-NEXT: andl $28, %edx
-; AVX1-NEXT: andl $7, %r10d
-; AVX1-NEXT: andl $28, %edi
-; AVX1-NEXT: vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
-; AVX1-NEXT: vpinsrd $1, (%rsp,%rdx), %xmm0, %xmm0
-; AVX1-NEXT: vpinsrd $2, (%rsp,%r10,4), %xmm0, %xmm0
-; AVX1-NEXT: vpinsrd $3, (%rsp,%rdi), %xmm0, %xmm0
-; AVX1-NEXT: vmovd {{.*#+}} xmm1 = mem[0],zero,zero,zero
-; AVX1-NEXT: vpinsrd $1, (%rsp,%rsi), %xmm1, %xmm1
-; AVX1-NEXT: vpinsrd $2, (%rsp,%r8,4), %xmm1, %xmm1
-; AVX1-NEXT: vpinsrd $3, (%rsp,%rcx), %xmm1, %xmm1
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX1-NEXT: movq %rbp, %rsp
-; AVX1-NEXT: popq %rbp
-; AVX1-NEXT: retq
-;
-; INT256-LABEL: var_shuffle_v8i32:
-; INT256: # %bb.0:
-; INT256-NEXT: vpermps %ymm0, %ymm1, %ymm0
-; INT256-NEXT: retq
- %index0 = extractelement <8 x i32> %indices, i32 0
- %index1 = extractelement <8 x i32> %indices, i32 1
- %index2 = extractelement <8 x i32> %indices, i32 2
- %index3 = extractelement <8 x i32> %indices, i32 3
- %index4 = extractelement <8 x i32> %indices, i32 4
- %index5 = extractelement <8 x i32> %indices, i32 5
- %index6 = extractelement <8 x i32> %indices, i32 6
- %index7 = extractelement <8 x i32> %indices, i32 7
- %v0 = extractelement <8 x i32> %v, i32 %index0
- %v1 = extractelement <8 x i32> %v, i32 %index1
- %v2 = extractelement <8 x i32> %v, i32 %index2
- %v3 = extractelement <8 x i32> %v, i32 %index3
- %v4 = extractelement <8 x i32> %v, i32 %index4
- %v5 = extractelement <8 x i32> %v, i32 %index5
- %v6 = extractelement <8 x i32> %v, i32 %index6
- %v7 = extractelement <8 x i32> %v, i32 %index7
- %ret0 = insertelement <8 x i32> undef, i32 %v0, i32 0
- %ret1 = insertelement <8 x i32> %ret0, i32 %v1, i32 1
- %ret2 = insertelement <8 x i32> %ret1, i32 %v2, i32 2
- %ret3 = insertelement <8 x i32> %ret2, i32 %v3, i32 3
- %ret4 = insertelement <8 x i32> %ret3, i32 %v4, i32 4
- %ret5 = insertelement <8 x i32> %ret4, i32 %v5, i32 5
- %ret6 = insertelement <8 x i32> %ret5, i32 %v6, i32 6
- %ret7 = insertelement <8 x i32> %ret6, i32 %v7, i32 7
- ret <8 x i32> %ret7
-}
-
-define <16 x i16> @var_shuffle_v16i16(<16 x i16> %v, <16 x i16> %indices) nounwind {
-; AVX1-LABEL: var_shuffle_v16i16:
-; AVX1: # %bb.0:
-; AVX1-NEXT: pushq %rbp
-; AVX1-NEXT: movq %rsp, %rbp
-; AVX1-NEXT: andq $-32, %rsp
-; AVX1-NEXT: subq $64, %rsp
-; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm2
-; AVX1-NEXT: vmovd %xmm2, %eax
-; AVX1-NEXT: vmovaps %ymm0, (%rsp)
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: movzwl (%rsp,%rax,2), %eax
-; AVX1-NEXT: vmovd %eax, %xmm0
-; AVX1-NEXT: vpextrw $1, %xmm2, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $1, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX1-NEXT: vpextrw $2, %xmm2, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $2, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX1-NEXT: vpextrw $3, %xmm2, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $3, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX1-NEXT: vpextrw $4, %xmm2, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $4, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX1-NEXT: vpextrw $5, %xmm2, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $5, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX1-NEXT: vpextrw $6, %xmm2, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $6, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX1-NEXT: vpextrw $7, %xmm2, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $7, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX1-NEXT: vmovd %xmm1, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: movzwl (%rsp,%rax,2), %eax
-; AVX1-NEXT: vmovd %eax, %xmm2
-; AVX1-NEXT: vpextrw $1, %xmm1, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $1, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX1-NEXT: vpextrw $2, %xmm1, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $2, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX1-NEXT: vpextrw $3, %xmm1, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $3, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX1-NEXT: vpextrw $4, %xmm1, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $4, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX1-NEXT: vpextrw $5, %xmm1, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $5, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX1-NEXT: vpextrw $6, %xmm1, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $6, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX1-NEXT: vpextrw $7, %xmm1, %eax
-; AVX1-NEXT: andl $15, %eax
-; AVX1-NEXT: vpinsrw $7, (%rsp,%rax,2), %xmm2, %xmm1
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX1-NEXT: movq %rbp, %rsp
-; AVX1-NEXT: popq %rbp
-; AVX1-NEXT: retq
-;
-; AVX2-LABEL: var_shuffle_v16i16:
-; AVX2: # %bb.0:
-; AVX2-NEXT: pushq %rbp
-; AVX2-NEXT: movq %rsp, %rbp
-; AVX2-NEXT: andq $-32, %rsp
-; AVX2-NEXT: subq $64, %rsp
-; AVX2-NEXT: vextracti128 $1, %ymm1, %xmm2
-; AVX2-NEXT: vmovd %xmm2, %eax
-; AVX2-NEXT: vmovaps %ymm0, (%rsp)
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: movzwl (%rsp,%rax,2), %eax
-; AVX2-NEXT: vmovd %eax, %xmm0
-; AVX2-NEXT: vpextrw $1, %xmm2, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $1, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX2-NEXT: vpextrw $2, %xmm2, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $2, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX2-NEXT: vpextrw $3, %xmm2, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $3, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX2-NEXT: vpextrw $4, %xmm2, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $4, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX2-NEXT: vpextrw $5, %xmm2, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $5, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX2-NEXT: vpextrw $6, %xmm2, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $6, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX2-NEXT: vpextrw $7, %xmm2, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $7, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX2-NEXT: vmovd %xmm1, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: movzwl (%rsp,%rax,2), %eax
-; AVX2-NEXT: vmovd %eax, %xmm2
-; AVX2-NEXT: vpextrw $1, %xmm1, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $1, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX2-NEXT: vpextrw $2, %xmm1, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $2, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX2-NEXT: vpextrw $3, %xmm1, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $3, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX2-NEXT: vpextrw $4, %xmm1, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $4, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX2-NEXT: vpextrw $5, %xmm1, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $5, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX2-NEXT: vpextrw $6, %xmm1, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $6, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX2-NEXT: vpextrw $7, %xmm1, %eax
-; AVX2-NEXT: andl $15, %eax
-; AVX2-NEXT: vpinsrw $7, (%rsp,%rax,2), %xmm2, %xmm1
-; AVX2-NEXT: vinserti128 $1, %xmm0, %ymm1, %ymm0
-; AVX2-NEXT: movq %rbp, %rsp
-; AVX2-NEXT: popq %rbp
-; AVX2-NEXT: retq
-;
-; AVX512F-LABEL: var_shuffle_v16i16:
-; AVX512F: # %bb.0:
-; AVX512F-NEXT: pushq %rbp
-; AVX512F-NEXT: movq %rsp, %rbp
-; AVX512F-NEXT: andq $-32, %rsp
-; AVX512F-NEXT: subq $64, %rsp
-; AVX512F-NEXT: vextracti128 $1, %ymm1, %xmm2
-; AVX512F-NEXT: vmovd %xmm2, %eax
-; AVX512F-NEXT: vmovaps %ymm0, (%rsp)
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: movzwl (%rsp,%rax,2), %eax
-; AVX512F-NEXT: vmovd %eax, %xmm0
-; AVX512F-NEXT: vpextrw $1, %xmm2, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $1, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512F-NEXT: vpextrw $2, %xmm2, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $2, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512F-NEXT: vpextrw $3, %xmm2, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $3, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512F-NEXT: vpextrw $4, %xmm2, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $4, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512F-NEXT: vpextrw $5, %xmm2, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $5, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512F-NEXT: vpextrw $6, %xmm2, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $6, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512F-NEXT: vpextrw $7, %xmm2, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $7, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512F-NEXT: vmovd %xmm1, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: movzwl (%rsp,%rax,2), %eax
-; AVX512F-NEXT: vmovd %eax, %xmm2
-; AVX512F-NEXT: vpextrw $1, %xmm1, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $1, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrw $2, %xmm1, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $2, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrw $3, %xmm1, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $3, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrw $4, %xmm1, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $4, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrw $5, %xmm1, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $5, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrw $6, %xmm1, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $6, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrw $7, %xmm1, %eax
-; AVX512F-NEXT: andl $15, %eax
-; AVX512F-NEXT: vpinsrw $7, (%rsp,%rax,2), %xmm2, %xmm1
-; AVX512F-NEXT: vinserti128 $1, %xmm0, %ymm1, %ymm0
-; AVX512F-NEXT: movq %rbp, %rsp
-; AVX512F-NEXT: popq %rbp
-; AVX512F-NEXT: retq
-;
-; AVX512VL-LABEL: var_shuffle_v16i16:
-; AVX512VL: # %bb.0:
-; AVX512VL-NEXT: pushq %rbp
-; AVX512VL-NEXT: movq %rsp, %rbp
-; AVX512VL-NEXT: andq $-32, %rsp
-; AVX512VL-NEXT: subq $64, %rsp
-; AVX512VL-NEXT: vextracti128 $1, %ymm1, %xmm2
-; AVX512VL-NEXT: vmovd %xmm2, %eax
-; AVX512VL-NEXT: vmovaps %ymm0, (%rsp)
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: movzwl (%rsp,%rax,2), %eax
-; AVX512VL-NEXT: vmovd %eax, %xmm0
-; AVX512VL-NEXT: vpextrw $1, %xmm2, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $1, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrw $2, %xmm2, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $2, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrw $3, %xmm2, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $3, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrw $4, %xmm2, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $4, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrw $5, %xmm2, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $5, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrw $6, %xmm2, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $6, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrw $7, %xmm2, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $7, (%rsp,%rax,2), %xmm0, %xmm0
-; AVX512VL-NEXT: vmovd %xmm1, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: movzwl (%rsp,%rax,2), %eax
-; AVX512VL-NEXT: vmovd %eax, %xmm2
-; AVX512VL-NEXT: vpextrw $1, %xmm1, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $1, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrw $2, %xmm1, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $2, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrw $3, %xmm1, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $3, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrw $4, %xmm1, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $4, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrw $5, %xmm1, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $5, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrw $6, %xmm1, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $6, (%rsp,%rax,2), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrw $7, %xmm1, %eax
-; AVX512VL-NEXT: andl $15, %eax
-; AVX512VL-NEXT: vpinsrw $7, (%rsp,%rax,2), %xmm2, %xmm1
-; AVX512VL-NEXT: vinserti128 $1, %xmm0, %ymm1, %ymm0
-; AVX512VL-NEXT: movq %rbp, %rsp
-; AVX512VL-NEXT: popq %rbp
-; AVX512VL-NEXT: retq
-;
-; AVX512VLBW-LABEL: var_shuffle_v16i16:
-; AVX512VLBW: # %bb.0:
-; AVX512VLBW-NEXT: vpermw %ymm0, %ymm1, %ymm0
-; AVX512VLBW-NEXT: retq
- %index0 = extractelement <16 x i16> %indices, i32 0
- %index1 = extractelement <16 x i16> %indices, i32 1
- %index2 = extractelement <16 x i16> %indices, i32 2
- %index3 = extractelement <16 x i16> %indices, i32 3
- %index4 = extractelement <16 x i16> %indices, i32 4
- %index5 = extractelement <16 x i16> %indices, i32 5
- %index6 = extractelement <16 x i16> %indices, i32 6
- %index7 = extractelement <16 x i16> %indices, i32 7
- %index8 = extractelement <16 x i16> %indices, i32 8
- %index9 = extractelement <16 x i16> %indices, i32 9
- %index10 = extractelement <16 x i16> %indices, i32 10
- %index11 = extractelement <16 x i16> %indices, i32 11
- %index12 = extractelement <16 x i16> %indices, i32 12
- %index13 = extractelement <16 x i16> %indices, i32 13
- %index14 = extractelement <16 x i16> %indices, i32 14
- %index15 = extractelement <16 x i16> %indices, i32 15
- %v0 = extractelement <16 x i16> %v, i16 %index0
- %v1 = extractelement <16 x i16> %v, i16 %index1
- %v2 = extractelement <16 x i16> %v, i16 %index2
- %v3 = extractelement <16 x i16> %v, i16 %index3
- %v4 = extractelement <16 x i16> %v, i16 %index4
- %v5 = extractelement <16 x i16> %v, i16 %index5
- %v6 = extractelement <16 x i16> %v, i16 %index6
- %v7 = extractelement <16 x i16> %v, i16 %index7
- %v8 = extractelement <16 x i16> %v, i16 %index8
- %v9 = extractelement <16 x i16> %v, i16 %index9
- %v10 = extractelement <16 x i16> %v, i16 %index10
- %v11 = extractelement <16 x i16> %v, i16 %index11
- %v12 = extractelement <16 x i16> %v, i16 %index12
- %v13 = extractelement <16 x i16> %v, i16 %index13
- %v14 = extractelement <16 x i16> %v, i16 %index14
- %v15 = extractelement <16 x i16> %v, i16 %index15
- %ret0 = insertelement <16 x i16> undef, i16 %v0, i32 0
- %ret1 = insertelement <16 x i16> %ret0, i16 %v1, i32 1
- %ret2 = insertelement <16 x i16> %ret1, i16 %v2, i32 2
- %ret3 = insertelement <16 x i16> %ret2, i16 %v3, i32 3
- %ret4 = insertelement <16 x i16> %ret3, i16 %v4, i32 4
- %ret5 = insertelement <16 x i16> %ret4, i16 %v5, i32 5
- %ret6 = insertelement <16 x i16> %ret5, i16 %v6, i32 6
- %ret7 = insertelement <16 x i16> %ret6, i16 %v7, i32 7
- %ret8 = insertelement <16 x i16> %ret7, i16 %v8, i32 8
- %ret9 = insertelement <16 x i16> %ret8, i16 %v9, i32 9
- %ret10 = insertelement <16 x i16> %ret9, i16 %v10, i32 10
- %ret11 = insertelement <16 x i16> %ret10, i16 %v11, i32 11
- %ret12 = insertelement <16 x i16> %ret11, i16 %v12, i32 12
- %ret13 = insertelement <16 x i16> %ret12, i16 %v13, i32 13
- %ret14 = insertelement <16 x i16> %ret13, i16 %v14, i32 14
- %ret15 = insertelement <16 x i16> %ret14, i16 %v15, i32 15
- ret <16 x i16> %ret15
-}
-
-define <32 x i8> @var_shuffle_v32i8(<32 x i8> %v, <32 x i8> %indices) nounwind {
-; AVX1-LABEL: var_shuffle_v32i8:
-; AVX1: # %bb.0:
-; AVX1-NEXT: pushq %rbp
-; AVX1-NEXT: movq %rsp, %rbp
-; AVX1-NEXT: andq $-32, %rsp
-; AVX1-NEXT: subq $64, %rsp
-; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm2
-; AVX1-NEXT: vpextrb $0, %xmm2, %eax
-; AVX1-NEXT: vmovaps %ymm0, (%rsp)
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vmovd %eax, %xmm0
-; AVX1-NEXT: vpextrb $1, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $1, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $2, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $2, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $3, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $3, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $4, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $4, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $5, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $5, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $6, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $6, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $7, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $7, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $8, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $8, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $9, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $9, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $10, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $10, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $11, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $11, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $12, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $12, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $13, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $13, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $14, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $14, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $15, %xmm2, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $15, %eax, %xmm0, %xmm0
-; AVX1-NEXT: vpextrb $0, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vmovd %eax, %xmm2
-; AVX1-NEXT: vpextrb $1, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $1, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $2, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $2, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $3, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $3, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $4, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $4, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $5, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $5, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $6, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $6, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $7, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $7, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $8, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $8, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $9, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $9, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $10, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $10, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $11, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $11, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $12, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $12, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $13, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $13, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $14, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: vpinsrb $14, (%rsp,%rax), %xmm2, %xmm2
-; AVX1-NEXT: vpextrb $15, %xmm1, %eax
-; AVX1-NEXT: andl $31, %eax
-; AVX1-NEXT: movzbl (%rsp,%rax), %eax
-; AVX1-NEXT: vpinsrb $15, %eax, %xmm2, %xmm1
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX1-NEXT: movq %rbp, %rsp
-; AVX1-NEXT: popq %rbp
-; AVX1-NEXT: retq
-;
-; AVX2-LABEL: var_shuffle_v32i8:
-; AVX2: # %bb.0:
-; AVX2-NEXT: pushq %rbp
-; AVX2-NEXT: movq %rsp, %rbp
-; AVX2-NEXT: andq $-32, %rsp
-; AVX2-NEXT: subq $64, %rsp
-; AVX2-NEXT: vextracti128 $1, %ymm1, %xmm2
-; AVX2-NEXT: vpextrb $0, %xmm2, %eax
-; AVX2-NEXT: vmovaps %ymm0, (%rsp)
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vmovd %eax, %xmm0
-; AVX2-NEXT: vpextrb $1, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $1, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $2, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $2, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $3, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $3, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $4, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $4, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $5, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $5, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $6, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $6, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $7, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $7, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $8, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $8, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $9, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $9, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $10, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $10, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $11, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $11, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $12, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $12, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $13, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $13, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $14, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $14, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $15, %xmm2, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $15, %eax, %xmm0, %xmm0
-; AVX2-NEXT: vpextrb $0, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vmovd %eax, %xmm2
-; AVX2-NEXT: vpextrb $1, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $1, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $2, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $2, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $3, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $3, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $4, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $4, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $5, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $5, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $6, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $6, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $7, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $7, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $8, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $8, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $9, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $9, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $10, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $10, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $11, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $11, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $12, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $12, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $13, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $13, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $14, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: vpinsrb $14, (%rsp,%rax), %xmm2, %xmm2
-; AVX2-NEXT: vpextrb $15, %xmm1, %eax
-; AVX2-NEXT: andl $31, %eax
-; AVX2-NEXT: movzbl (%rsp,%rax), %eax
-; AVX2-NEXT: vpinsrb $15, %eax, %xmm2, %xmm1
-; AVX2-NEXT: vinserti128 $1, %xmm0, %ymm1, %ymm0
-; AVX2-NEXT: movq %rbp, %rsp
-; AVX2-NEXT: popq %rbp
-; AVX2-NEXT: retq
-;
-; AVX512F-LABEL: var_shuffle_v32i8:
-; AVX512F: # %bb.0:
-; AVX512F-NEXT: pushq %rbp
-; AVX512F-NEXT: movq %rsp, %rbp
-; AVX512F-NEXT: andq $-32, %rsp
-; AVX512F-NEXT: subq $64, %rsp
-; AVX512F-NEXT: vextracti128 $1, %ymm1, %xmm2
-; AVX512F-NEXT: vpextrb $0, %xmm2, %eax
-; AVX512F-NEXT: vmovaps %ymm0, (%rsp)
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vmovd %eax, %xmm0
-; AVX512F-NEXT: vpextrb $1, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $1, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $2, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $2, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $3, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $3, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $4, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $4, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $5, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $5, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $6, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $6, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $7, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $7, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $8, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $8, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $9, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $9, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $10, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $10, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $11, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $11, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $12, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $12, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $13, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $13, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $14, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $14, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $15, %xmm2, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $15, %eax, %xmm0, %xmm0
-; AVX512F-NEXT: vpextrb $0, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vmovd %eax, %xmm2
-; AVX512F-NEXT: vpextrb $1, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $1, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $2, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $2, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $3, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $3, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $4, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $4, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $5, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $5, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $6, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $6, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $7, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $7, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $8, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $8, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $9, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $9, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $10, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $10, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $11, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $11, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $12, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $12, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $13, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $13, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $14, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: vpinsrb $14, (%rsp,%rax), %xmm2, %xmm2
-; AVX512F-NEXT: vpextrb $15, %xmm1, %eax
-; AVX512F-NEXT: andl $31, %eax
-; AVX512F-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512F-NEXT: vpinsrb $15, %eax, %xmm2, %xmm1
-; AVX512F-NEXT: vinserti128 $1, %xmm0, %ymm1, %ymm0
-; AVX512F-NEXT: movq %rbp, %rsp
-; AVX512F-NEXT: popq %rbp
-; AVX512F-NEXT: retq
-;
-; AVX512VL-LABEL: var_shuffle_v32i8:
-; AVX512VL: # %bb.0:
-; AVX512VL-NEXT: pushq %rbp
-; AVX512VL-NEXT: movq %rsp, %rbp
-; AVX512VL-NEXT: andq $-32, %rsp
-; AVX512VL-NEXT: subq $64, %rsp
-; AVX512VL-NEXT: vextracti128 $1, %ymm1, %xmm2
-; AVX512VL-NEXT: vpextrb $0, %xmm2, %eax
-; AVX512VL-NEXT: vmovaps %ymm0, (%rsp)
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vmovd %eax, %xmm0
-; AVX512VL-NEXT: vpextrb $1, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $1, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $2, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $2, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $3, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $3, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $4, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $4, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $5, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $5, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $6, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $6, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $7, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $7, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $8, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $8, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $9, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $9, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $10, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $10, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $11, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $11, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $12, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $12, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $13, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $13, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $14, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $14, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $15, %xmm2, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $15, %eax, %xmm0, %xmm0
-; AVX512VL-NEXT: vpextrb $0, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vmovd %eax, %xmm2
-; AVX512VL-NEXT: vpextrb $1, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $1, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $2, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $2, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $3, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $3, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $4, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $4, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $5, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $5, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $6, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $6, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $7, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $7, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $8, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $8, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $9, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $9, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $10, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $10, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $11, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $11, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $12, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $12, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $13, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $13, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $14, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: vpinsrb $14, (%rsp,%rax), %xmm2, %xmm2
-; AVX512VL-NEXT: vpextrb $15, %xmm1, %eax
-; AVX512VL-NEXT: andl $31, %eax
-; AVX512VL-NEXT: movzbl (%rsp,%rax), %eax
-; AVX512VL-NEXT: vpinsrb $15, %eax, %xmm2, %xmm1
-; AVX512VL-NEXT: vinserti128 $1, %xmm0, %ymm1, %ymm0
-; AVX512VL-NEXT: movq %rbp, %rsp
-; AVX512VL-NEXT: popq %rbp
-; AVX512VL-NEXT: retq
-;
-; VBMI-LABEL: var_shuffle_v32i8:
-; VBMI: # %bb.0:
-; VBMI-NEXT: vpermb %ymm0, %ymm1, %ymm0
-; VBMI-NEXT: retq
- %index0 = extractelement <32 x i8> %indices, i32 0
- %index1 = extractelement <32 x i8> %indices, i32 1
- %index2 = extractelement <32 x i8> %indices, i32 2
- %index3 = extractelement <32 x i8> %indices, i32 3
- %index4 = extractelement <32 x i8> %indices, i32 4
- %index5 = extractelement <32 x i8> %indices, i32 5
- %index6 = extractelement <32 x i8> %indices, i32 6
- %index7 = extractelement <32 x i8> %indices, i32 7
- %index8 = extractelement <32 x i8> %indices, i32 8
- %index9 = extractelement <32 x i8> %indices, i32 9
- %index10 = extractelement <32 x i8> %indices, i32 10
- %index11 = extractelement <32 x i8> %indices, i32 11
- %index12 = extractelement <32 x i8> %indices, i32 12
- %index13 = extractelement <32 x i8> %indices, i32 13
- %index14 = extractelement <32 x i8> %indices, i32 14
- %index15 = extractelement <32 x i8> %indices, i32 15
- %index16 = extractelement <32 x i8> %indices, i32 16
- %index17 = extractelement <32 x i8> %indices, i32 17
- %index18 = extractelement <32 x i8> %indices, i32 18
- %index19 = extractelement <32 x i8> %indices, i32 19
- %index20 = extractelement <32 x i8> %indices, i32 20
- %index21 = extractelement <32 x i8> %indices, i32 21
- %index22 = extractelement <32 x i8> %indices, i32 22
- %index23 = extractelement <32 x i8> %indices, i32 23
- %index24 = extractelement <32 x i8> %indices, i32 24
- %index25 = extractelement <32 x i8> %indices, i32 25
- %index26 = extractelement <32 x i8> %indices, i32 26
- %index27 = extractelement <32 x i8> %indices, i32 27
- %index28 = extractelement <32 x i8> %indices, i32 28
- %index29 = extractelement <32 x i8> %indices, i32 29
- %index30 = extractelement <32 x i8> %indices, i32 30
- %index31 = extractelement <32 x i8> %indices, i32 31
- %v0 = extractelement <32 x i8> %v, i8 %index0
- %v1 = extractelement <32 x i8> %v, i8 %index1
- %v2 = extractelement <32 x i8> %v, i8 %index2
- %v3 = extractelement <32 x i8> %v, i8 %index3
- %v4 = extractelement <32 x i8> %v, i8 %index4
- %v5 = extractelement <32 x i8> %v, i8 %index5
- %v6 = extractelement <32 x i8> %v, i8 %index6
- %v7 = extractelement <32 x i8> %v, i8 %index7
- %v8 = extractelement <32 x i8> %v, i8 %index8
- %v9 = extractelement <32 x i8> %v, i8 %index9
- %v10 = extractelement <32 x i8> %v, i8 %index10
- %v11 = extractelement <32 x i8> %v, i8 %index11
- %v12 = extractelement <32 x i8> %v, i8 %index12
- %v13 = extractelement <32 x i8> %v, i8 %index13
- %v14 = extractelement <32 x i8> %v, i8 %index14
- %v15 = extractelement <32 x i8> %v, i8 %index15
- %v16 = extractelement <32 x i8> %v, i8 %index16
- %v17 = extractelement <32 x i8> %v, i8 %index17
- %v18 = extractelement <32 x i8> %v, i8 %index18
- %v19 = extractelement <32 x i8> %v, i8 %index19
- %v20 = extractelement <32 x i8> %v, i8 %index20
- %v21 = extractelement <32 x i8> %v, i8 %index21
- %v22 = extractelement <32 x i8> %v, i8 %index22
- %v23 = extractelement <32 x i8> %v, i8 %index23
- %v24 = extractelement <32 x i8> %v, i8 %index24
- %v25 = extractelement <32 x i8> %v, i8 %index25
- %v26 = extractelement <32 x i8> %v, i8 %index26
- %v27 = extractelement <32 x i8> %v, i8 %index27
- %v28 = extractelement <32 x i8> %v, i8 %index28
- %v29 = extractelement <32 x i8> %v, i8 %index29
- %v30 = extractelement <32 x i8> %v, i8 %index30
- %v31 = extractelement <32 x i8> %v, i8 %index31
- %ret0 = insertelement <32 x i8> undef, i8 %v0, i32 0
- %ret1 = insertelement <32 x i8> %ret0, i8 %v1, i32 1
- %ret2 = insertelement <32 x i8> %ret1, i8 %v2, i32 2
- %ret3 = insertelement <32 x i8> %ret2, i8 %v3, i32 3
- %ret4 = insertelement <32 x i8> %ret3, i8 %v4, i32 4
- %ret5 = insertelement <32 x i8> %ret4, i8 %v5, i32 5
- %ret6 = insertelement <32 x i8> %ret5, i8 %v6, i32 6
- %ret7 = insertelement <32 x i8> %ret6, i8 %v7, i32 7
- %ret8 = insertelement <32 x i8> %ret7, i8 %v8, i32 8
- %ret9 = insertelement <32 x i8> %ret8, i8 %v9, i32 9
- %ret10 = insertelement <32 x i8> %ret9, i8 %v10, i32 10
- %ret11 = insertelement <32 x i8> %ret10, i8 %v11, i32 11
- %ret12 = insertelement <32 x i8> %ret11, i8 %v12, i32 12
- %ret13 = insertelement <32 x i8> %ret12, i8 %v13, i32 13
- %ret14 = insertelement <32 x i8> %ret13, i8 %v14, i32 14
- %ret15 = insertelement <32 x i8> %ret14, i8 %v15, i32 15
- %ret16 = insertelement <32 x i8> %ret15, i8 %v16, i32 16
- %ret17 = insertelement <32 x i8> %ret16, i8 %v17, i32 17
- %ret18 = insertelement <32 x i8> %ret17, i8 %v18, i32 18
- %ret19 = insertelement <32 x i8> %ret18, i8 %v19, i32 19
- %ret20 = insertelement <32 x i8> %ret19, i8 %v20, i32 20
- %ret21 = insertelement <32 x i8> %ret20, i8 %v21, i32 21
- %ret22 = insertelement <32 x i8> %ret21, i8 %v22, i32 22
- %ret23 = insertelement <32 x i8> %ret22, i8 %v23, i32 23
- %ret24 = insertelement <32 x i8> %ret23, i8 %v24, i32 24
- %ret25 = insertelement <32 x i8> %ret24, i8 %v25, i32 25
- %ret26 = insertelement <32 x i8> %ret25, i8 %v26, i32 26
- %ret27 = insertelement <32 x i8> %ret26, i8 %v27, i32 27
- %ret28 = insertelement <32 x i8> %ret27, i8 %v28, i32 28
- %ret29 = insertelement <32 x i8> %ret28, i8 %v29, i32 29
- %ret30 = insertelement <32 x i8> %ret29, i8 %v30, i32 30
- %ret31 = insertelement <32 x i8> %ret30, i8 %v31, i32 31
- ret <32 x i8> %ret31
-}
-
-define <4 x double> @var_shuffle_v4f64(<4 x double> %v, <4 x i64> %indices) nounwind {
-; AVX1-LABEL: var_shuffle_v4f64:
-; AVX1: # %bb.0:
-; AVX1-NEXT: pushq %rbp
-; AVX1-NEXT: movq %rsp, %rbp
-; AVX1-NEXT: andq $-32, %rsp
-; AVX1-NEXT: subq $64, %rsp
-; AVX1-NEXT: vmovq %xmm1, %rax
-; AVX1-NEXT: andl $3, %eax
-; AVX1-NEXT: vpextrq $1, %xmm1, %rcx
-; AVX1-NEXT: andl $3, %ecx
-; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm1
-; AVX1-NEXT: vmovq %xmm1, %rdx
-; AVX1-NEXT: andl $3, %edx
-; AVX1-NEXT: vpextrq $1, %xmm1, %rsi
-; AVX1-NEXT: andl $3, %esi
-; AVX1-NEXT: vmovaps %ymm0, (%rsp)
-; AVX1-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
-; AVX1-NEXT: vmovhpd {{.*#+}} xmm0 = xmm0[0],mem[0]
-; AVX1-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX1-NEXT: vmovhpd {{.*#+}} xmm1 = xmm1[0],mem[0]
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX1-NEXT: movq %rbp, %rsp
-; AVX1-NEXT: popq %rbp
-; AVX1-NEXT: retq
-;
-; AVX2-LABEL: var_shuffle_v4f64:
-; AVX2: # %bb.0:
-; AVX2-NEXT: pushq %rbp
-; AVX2-NEXT: movq %rsp, %rbp
-; AVX2-NEXT: andq $-32, %rsp
-; AVX2-NEXT: subq $64, %rsp
-; AVX2-NEXT: vmovq %xmm1, %rax
-; AVX2-NEXT: andl $3, %eax
-; AVX2-NEXT: vpextrq $1, %xmm1, %rcx
-; AVX2-NEXT: andl $3, %ecx
-; AVX2-NEXT: vextracti128 $1, %ymm1, %xmm1
-; AVX2-NEXT: vmovq %xmm1, %rdx
-; AVX2-NEXT: andl $3, %edx
-; AVX2-NEXT: vpextrq $1, %xmm1, %rsi
-; AVX2-NEXT: andl $3, %esi
-; AVX2-NEXT: vmovaps %ymm0, (%rsp)
-; AVX2-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
-; AVX2-NEXT: vmovhpd {{.*#+}} xmm0 = xmm0[0],mem[0]
-; AVX2-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX2-NEXT: vmovhpd {{.*#+}} xmm1 = xmm1[0],mem[0]
-; AVX2-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX2-NEXT: movq %rbp, %rsp
-; AVX2-NEXT: popq %rbp
-; AVX2-NEXT: retq
-;
-; AVX512F-LABEL: var_shuffle_v4f64:
-; AVX512F: # %bb.0:
-; AVX512F-NEXT: pushq %rbp
-; AVX512F-NEXT: movq %rsp, %rbp
-; AVX512F-NEXT: andq $-32, %rsp
-; AVX512F-NEXT: subq $64, %rsp
-; AVX512F-NEXT: vmovq %xmm1, %rax
-; AVX512F-NEXT: andl $3, %eax
-; AVX512F-NEXT: vpextrq $1, %xmm1, %rcx
-; AVX512F-NEXT: andl $3, %ecx
-; AVX512F-NEXT: vextracti128 $1, %ymm1, %xmm1
-; AVX512F-NEXT: vmovq %xmm1, %rdx
-; AVX512F-NEXT: andl $3, %edx
-; AVX512F-NEXT: vpextrq $1, %xmm1, %rsi
-; AVX512F-NEXT: andl $3, %esi
-; AVX512F-NEXT: vmovaps %ymm0, (%rsp)
-; AVX512F-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
-; AVX512F-NEXT: vmovhpd {{.*#+}} xmm0 = xmm0[0],mem[0]
-; AVX512F-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
-; AVX512F-NEXT: vmovhpd {{.*#+}} xmm1 = xmm1[0],mem[0]
-; AVX512F-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX512F-NEXT: movq %rbp, %rsp
-; AVX512F-NEXT: popq %rbp
-; AVX512F-NEXT: retq
-;
-; AVX512VL-LABEL: var_shuffle_v4f64:
-; AVX512VL: # %bb.0:
-; AVX512VL-NEXT: vpermpd %ymm0, %ymm1, %ymm0
-; AVX512VL-NEXT: retq
-;
-; AVX512VLBW-LABEL: var_shuffle_v4f64:
-; AVX512VLBW: # %bb.0:
-; AVX512VLBW-NEXT: vpermpd %ymm0, %ymm1, %ymm0
-; AVX512VLBW-NEXT: retq
- %index0 = extractelement <4 x i64> %indices, i32 0
- %index1 = extractelement <4 x i64> %indices, i32 1
- %index2 = extractelement <4 x i64> %indices, i32 2
- %index3 = extractelement <4 x i64> %indices, i32 3
- %v0 = extractelement <4 x double> %v, i64 %index0
- %v1 = extractelement <4 x double> %v, i64 %index1
- %v2 = extractelement <4 x double> %v, i64 %index2
- %v3 = extractelement <4 x double> %v, i64 %index3
- %ret0 = insertelement <4 x double> undef, double %v0, i32 0
- %ret1 = insertelement <4 x double> %ret0, double %v1, i32 1
- %ret2 = insertelement <4 x double> %ret1, double %v2, i32 2
- %ret3 = insertelement <4 x double> %ret2, double %v3, i32 3
- ret <4 x double> %ret3
-}
-
-define <8 x float> @var_shuffle_v8f32(<8 x float> %v, <8 x i32> %indices) nounwind {
-; AVX1-LABEL: var_shuffle_v8f32:
-; AVX1: # %bb.0:
-; AVX1-NEXT: pushq %rbp
-; AVX1-NEXT: movq %rsp, %rbp
-; AVX1-NEXT: andq $-32, %rsp
-; AVX1-NEXT: subq $64, %rsp
-; AVX1-NEXT: vpextrq $1, %xmm1, %r8
-; AVX1-NEXT: movq %r8, %rcx
-; AVX1-NEXT: shrq $30, %rcx
-; AVX1-NEXT: vmovq %xmm1, %r9
-; AVX1-NEXT: movq %r9, %rdx
-; AVX1-NEXT: shrq $30, %rdx
-; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm1
-; AVX1-NEXT: vpextrq $1, %xmm1, %r10
-; AVX1-NEXT: movq %r10, %rdi
-; AVX1-NEXT: shrq $30, %rdi
-; AVX1-NEXT: vmovq %xmm1, %rax
-; AVX1-NEXT: movq %rax, %rsi
-; AVX1-NEXT: shrq $30, %rsi
-; AVX1-NEXT: vmovaps %ymm0, (%rsp)
-; AVX1-NEXT: andl $7, %r9d
-; AVX1-NEXT: andl $28, %edx
-; AVX1-NEXT: andl $7, %r8d
-; AVX1-NEXT: andl $28, %ecx
-; AVX1-NEXT: andl $7, %eax
-; AVX1-NEXT: andl $28, %esi
-; AVX1-NEXT: andl $7, %r10d
-; AVX1-NEXT: andl $28, %edi
-; AVX1-NEXT: vmovss {{.*#+}} xmm0 = mem[0],zero,zero,zero
-; AVX1-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0],mem[0],xmm0[2,3]
-; AVX1-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0,1],mem[0],xmm0[3]
-; AVX1-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0,1,2],mem[0]
-; AVX1-NEXT: vmovss {{.*#+}} xmm1 = mem[0],zero,zero,zero
-; AVX1-NEXT: vinsertps {{.*#+}} xmm1 = xmm1[0],mem[0],xmm1[2,3]
-; AVX1-NEXT: vinsertps {{.*#+}} xmm1 = xmm1[0,1],mem[0],xmm1[3]
-; AVX1-NEXT: vinsertps {{.*#+}} xmm1 = xmm1[0,1,2],mem[0]
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX1-NEXT: movq %rbp, %rsp
-; AVX1-NEXT: popq %rbp
-; AVX1-NEXT: retq
-;
-; INT256-LABEL: var_shuffle_v8f32:
-; INT256: # %bb.0:
-; INT256-NEXT: vpermps %ymm0, %ymm1, %ymm0
-; INT256-NEXT: retq
- %index0 = extractelement <8 x i32> %indices, i32 0
- %index1 = extractelement <8 x i32> %indices, i32 1
- %index2 = extractelement <8 x i32> %indices, i32 2
- %index3 = extractelement <8 x i32> %indices, i32 3
- %index4 = extractelement <8 x i32> %indices, i32 4
- %index5 = extractelement <8 x i32> %indices, i32 5
- %index6 = extractelement <8 x i32> %indices, i32 6
- %index7 = extractelement <8 x i32> %indices, i32 7
- %v0 = extractelement <8 x float> %v, i32 %index0
- %v1 = extractelement <8 x float> %v, i32 %index1
- %v2 = extractelement <8 x float> %v, i32 %index2
- %v3 = extractelement <8 x float> %v, i32 %index3
- %v4 = extractelement <8 x float> %v, i32 %index4
- %v5 = extractelement <8 x float> %v, i32 %index5
- %v6 = extractelement <8 x float> %v, i32 %index6
- %v7 = extractelement <8 x float> %v, i32 %index7
- %ret0 = insertelement <8 x float> undef, float %v0, i32 0
- %ret1 = insertelement <8 x float> %ret0, float %v1, i32 1
- %ret2 = insertelement <8 x float> %ret1, float %v2, i32 2
- %ret3 = insertelement <8 x float> %ret2, float %v3, i32 3
- %ret4 = insertelement <8 x float> %ret3, float %v4, i32 4
- %ret5 = insertelement <8 x float> %ret4, float %v5, i32 5
- %ret6 = insertelement <8 x float> %ret5, float %v6, i32 6
- %ret7 = insertelement <8 x float> %ret6, float %v7, i32 7
- ret <8 x float> %ret7
-}
-
-define <8 x i32> @pr35820(<4 x i32> %v, <8 x i32> %indices) unnamed_addr nounwind {
-; AVX1-LABEL: pr35820:
-; AVX1: # %bb.0: # %entry
-; AVX1-NEXT: vpextrq $1, %xmm1, %r8
-; AVX1-NEXT: movq %r8, %r10
-; AVX1-NEXT: shrq $30, %r10
-; AVX1-NEXT: vmovq %xmm1, %r9
-; AVX1-NEXT: movq %r9, %rsi
-; AVX1-NEXT: shrq $30, %rsi
-; AVX1-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; AVX1-NEXT: andl $3, %r9d
-; AVX1-NEXT: andl $12, %esi
-; AVX1-NEXT: andl $3, %r8d
-; AVX1-NEXT: andl $12, %r10d
-; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm0
-; AVX1-NEXT: vpextrq $1, %xmm0, %rax
-; AVX1-NEXT: movq %rax, %rdi
-; AVX1-NEXT: shrq $30, %rdi
-; AVX1-NEXT: vmovq %xmm0, %rcx
-; AVX1-NEXT: movq %rcx, %rdx
-; AVX1-NEXT: shrq $30, %rdx
-; AVX1-NEXT: andl $3, %ecx
-; AVX1-NEXT: andl $12, %edx
-; AVX1-NEXT: andl $3, %eax
-; AVX1-NEXT: andl $12, %edi
-; AVX1-NEXT: vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
-; AVX1-NEXT: vpinsrd $1, -24(%rsp,%rdx), %xmm0, %xmm0
-; AVX1-NEXT: vpinsrd $2, -24(%rsp,%rax,4), %xmm0, %xmm0
-; AVX1-NEXT: vpinsrd $3, -24(%rsp,%rdi), %xmm0, %xmm0
-; AVX1-NEXT: vmovd {{.*#+}} xmm1 = mem[0],zero,zero,zero
-; AVX1-NEXT: vpinsrd $1, -24(%rsp,%rsi), %xmm1, %xmm1
-; AVX1-NEXT: vpinsrd $2, -24(%rsp,%r8,4), %xmm1, %xmm1
-; AVX1-NEXT: vpinsrd $3, -24(%rsp,%r10), %xmm1, %xmm1
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX1-NEXT: retq
-;
-; INT256-LABEL: pr35820:
-; INT256: # %bb.0: # %entry
-; INT256-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; INT256-NEXT: vpermps %ymm0, %ymm1, %ymm0
-; INT256-NEXT: retq
-entry:
- %tmp1 = extractelement <8 x i32> %indices, i32 0
- %vecext2.8 = extractelement <4 x i32> %v, i32 %tmp1
- %tmp2 = extractelement <8 x i32> %indices, i32 1
- %vecext2.9 = extractelement <4 x i32> %v, i32 %tmp2
- %tmp3 = extractelement <8 x i32> %indices, i32 2
- %vecext2.10 = extractelement <4 x i32> %v, i32 %tmp3
- %tmp4 = extractelement <8 x i32> %indices, i32 3
- %vecext2.11 = extractelement <4 x i32> %v, i32 %tmp4
- %tmp5 = extractelement <8 x i32> %indices, i32 4
- %vecext2.12 = extractelement <4 x i32> %v, i32 %tmp5
- %tmp6 = extractelement <8 x i32> %indices, i32 5
- %vecext2.13 = extractelement <4 x i32> %v, i32 %tmp6
- %tmp7 = extractelement <8 x i32> %indices, i32 6
- %vecext2.14 = extractelement <4 x i32> %v, i32 %tmp7
- %tmp8 = extractelement <8 x i32> %indices, i32 7
- %vecext2.15 = extractelement <4 x i32> %v, i32 %tmp8
- %tmp9 = insertelement <8 x i32> undef, i32 %vecext2.8, i32 0
- %tmp10 = insertelement <8 x i32> %tmp9, i32 %vecext2.9, i32 1
- %tmp11 = insertelement <8 x i32> %tmp10, i32 %vecext2.10, i32 2
- %tmp12 = insertelement <8 x i32> %tmp11, i32 %vecext2.11, i32 3
- %tmp13 = insertelement <8 x i32> %tmp12, i32 %vecext2.12, i32 4
- %tmp14 = insertelement <8 x i32> %tmp13, i32 %vecext2.13, i32 5
- %tmp15 = insertelement <8 x i32> %tmp14, i32 %vecext2.14, i32 6
- %tmp16 = insertelement <8 x i32> %tmp15, i32 %vecext2.15, i32 7
- ret <8 x i32> %tmp16
-}
-
-define <8 x float> @pr35820_float(<4 x float> %v, <8 x i32> %indices) unnamed_addr nounwind {
-; AVX1-LABEL: pr35820_float:
-; AVX1: # %bb.0: # %entry
-; AVX1-NEXT: vpextrq $1, %xmm1, %r8
-; AVX1-NEXT: movq %r8, %r10
-; AVX1-NEXT: shrq $30, %r10
-; AVX1-NEXT: vmovq %xmm1, %r9
-; AVX1-NEXT: movq %r9, %rdx
-; AVX1-NEXT: shrq $30, %rdx
-; AVX1-NEXT: vmovaps %xmm0, -{{[0-9]+}}(%rsp)
-; AVX1-NEXT: andl $3, %r9d
-; AVX1-NEXT: andl $12, %edx
-; AVX1-NEXT: andl $3, %r8d
-; AVX1-NEXT: andl $12, %r10d
-; AVX1-NEXT: vextractf128 $1, %ymm1, %xmm0
-; AVX1-NEXT: vpextrq $1, %xmm0, %rax
-; AVX1-NEXT: movq %rax, %rdi
-; AVX1-NEXT: shrq $30, %rdi
-; AVX1-NEXT: vmovq %xmm0, %rcx
-; AVX1-NEXT: movq %rcx, %rsi
-; AVX1-NEXT: shrq $30, %rsi
-; AVX1-NEXT: andl $3, %ecx
-; AVX1-NEXT: andl $12, %esi
-; AVX1-NEXT: andl $3, %eax
-; AVX1-NEXT: andl $12, %edi
-; AVX1-NEXT: vmovss {{.*#+}} xmm0 = mem[0],zero,zero,zero
-; AVX1-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0],mem[0],xmm0[2,3]
-; AVX1-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0,1],mem[0],xmm0[3]
-; AVX1-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0,1,2],mem[0]
-; AVX1-NEXT: vmovss {{.*#+}} xmm1 = mem[0],zero,zero,zero
-; AVX1-NEXT: vinsertps {{.*#+}} xmm1 = xmm1[0],mem[0],xmm1[2,3]
-; AVX1-NEXT: vinsertps {{.*#+}} xmm1 = xmm1[0,1],mem[0],xmm1[3]
-; AVX1-NEXT: vinsertps {{.*#+}} xmm1 = xmm1[0,1,2],mem[0]
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm1, %ymm0
-; AVX1-NEXT: retq
-;
-; INT256-LABEL: pr35820_float:
-; INT256: # %bb.0: # %entry
-; INT256-NEXT: # kill: def %xmm0 killed %xmm0 def %ymm0
-; INT256-NEXT: vpermps %ymm0, %ymm1, %ymm0
-; INT256-NEXT: retq
-entry:
- %tmp1 = extractelement <8 x i32> %indices, i32 0
- %vecext2.8 = extractelement <4 x float> %v, i32 %tmp1
- %tmp2 = extractelement <8 x i32> %indices, i32 1
- %vecext2.9 = extractelement <4 x float> %v, i32 %tmp2
- %tmp3 = extractelement <8 x i32> %indices, i32 2
- %vecext2.10 = extractelement <4 x float> %v, i32 %tmp3
- %tmp4 = extractelement <8 x i32> %indices, i32 3
- %vecext2.11 = extractelement <4 x float> %v, i32 %tmp4
- %tmp5 = extractelement <8 x i32> %indices, i32 4
- %vecext2.12 = extractelement <4 x float> %v, i32 %tmp5
- %tmp6 = extractelement <8 x i32> %indices, i32 5
- %vecext2.13 = extractelement <4 x float> %v, i32 %tmp6
- %tmp7 = extractelement <8 x i32> %indices, i32 6
- %vecext2.14 = extractelement <4 x float> %v, i32 %tmp7
- %tmp8 = extractelement <8 x i32> %indices, i32 7
- %vecext2.15 = extractelement <4 x float> %v, i32 %tmp8
- %tmp9 = insertelement <8 x float> undef, float %vecext2.8, i32 0
- %tmp10 = insertelement <8 x float> %tmp9, float %vecext2.9, i32 1
- %tmp11 = insertelement <8 x float> %tmp10, float %vecext2.10, i32 2
- %tmp12 = insertelement <8 x float> %tmp11, float %vecext2.11, i32 3
- %tmp13 = insertelement <8 x float> %tmp12, float %vecext2.12, i32 4
- %tmp14 = insertelement <8 x float> %tmp13, float %vecext2.13, i32 5
- %tmp15 = insertelement <8 x float> %tmp14, float %vecext2.14, i32 6
- %tmp16 = insertelement <8 x float> %tmp15, float %vecext2.15, i32 7
- ret <8 x float> %tmp16
-}
-
-define <4 x i32> @big_source(<8 x i32> %v, <4 x i32> %indices) unnamed_addr nounwind {
-; AVX-LABEL: big_source:
-; AVX: # %bb.0: # %entry
-; AVX-NEXT: pushq %rbp
-; AVX-NEXT: movq %rsp, %rbp
-; AVX-NEXT: andq $-32, %rsp
-; AVX-NEXT: subq $64, %rsp
-; AVX-NEXT: vmovq %xmm1, %rax
-; AVX-NEXT: movq %rax, %rcx
-; AVX-NEXT: shrq $30, %rcx
-; AVX-NEXT: andl $28, %ecx
-; AVX-NEXT: vpextrq $1, %xmm1, %rdx
-; AVX-NEXT: movq %rdx, %rsi
-; AVX-NEXT: sarq $32, %rsi
-; AVX-NEXT: andl $7, %eax
-; AVX-NEXT: andl $7, %edx
-; AVX-NEXT: vmovaps %ymm0, (%rsp)
-; AVX-NEXT: andl $7, %esi
-; AVX-NEXT: vmovd {{.*#+}} xmm0 = mem[0],zero,zero,zero
-; AVX-NEXT: vpinsrd $1, (%rsp,%rcx), %xmm0, %xmm0
-; AVX-NEXT: vpinsrd $2, (%rsp,%rdx,4), %xmm0, %xmm0
-; AVX-NEXT: vpinsrd $3, (%rsp,%rsi,4), %xmm0, %xmm0
-; AVX-NEXT: movq %rbp, %rsp
-; AVX-NEXT: popq %rbp
-; AVX-NEXT: vzeroupper
-; AVX-NEXT: retq
-entry:
- %tmp1 = extractelement <4 x i32> %indices, i32 0
- %vecext2.8 = extractelement <8 x i32> %v, i32 %tmp1
- %tmp2 = extractelement <4 x i32> %indices, i32 1
- %vecext2.9 = extractelement <8 x i32> %v, i32 %tmp2
- %tmp3 = extractelement <4 x i32> %indices, i32 2
- %vecext2.10 = extractelement <8 x i32> %v, i32 %tmp3
- %tmp4 = extractelement <4 x i32> %indices, i32 3
- %vecext2.11 = extractelement <8 x i32> %v, i32 %tmp4
- %tmp9 = insertelement <4 x i32> undef, i32 %vecext2.8, i32 0
- %tmp10 = insertelement <4 x i32> %tmp9, i32 %vecext2.9, i32 1
- %tmp11 = insertelement <4 x i32> %tmp10, i32 %vecext2.10, i32 2
- %tmp12 = insertelement <4 x i32> %tmp11, i32 %vecext2.11, i32 3
- ret <4 x i32> %tmp12
-}
diff --git a/test/CodeGen/X86/vastart-defs-eflags.ll b/test/CodeGen/X86/vastart-defs-eflags.ll
index d0c515089f4..6ef691552aa 100644
--- a/test/CodeGen/X86/vastart-defs-eflags.ll
+++ b/test/CodeGen/X86/vastart-defs-eflags.ll
@@ -1,3 +1,4 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc %s -o - | FileCheck %s
target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
@@ -5,10 +6,41 @@ target triple = "x86_64-apple-macosx10.10.0"
; Check that vastart handling doesn't get between testb and je for the branch.
define i32 @check_flag(i32 %flags, ...) nounwind {
+; CHECK-LABEL: check_flag:
+; CHECK: ## %bb.0: ## %entry
+; CHECK-NEXT: subq $56, %rsp
+; CHECK-NEXT: testb %al, %al
+; CHECK-NEXT: je LBB0_2
+; CHECK-NEXT: ## %bb.1: ## %entry
+; CHECK-NEXT: movaps %xmm0, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movaps %xmm1, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movaps %xmm2, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movaps %xmm3, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movaps %xmm4, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movaps %xmm5, (%rsp)
+; CHECK-NEXT: movaps %xmm6, {{[0-9]+}}(%rsp)
+; CHECK-NEXT: movaps %xmm7, {{[0-9]+}}(%rsp)
+; CHECK-NEXT: LBB0_2: ## %entry
+; CHECK-NEXT: movq %r9, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movq %r8, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movq %rcx, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movq %rdx, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: movq %rsi, -{{[0-9]+}}(%rsp)
+; CHECK-NEXT: xorl %eax, %eax
+; CHECK-NEXT: testl $512, %edi ## imm = 0x200
+; CHECK-NEXT: je LBB0_4
+; CHECK-NEXT: ## %bb.3: ## %if.then
+; CHECK-NEXT: leaq -{{[0-9]+}}(%rsp), %rax
+; CHECK-NEXT: movq %rax, 16
+; CHECK-NEXT: leaq {{[0-9]+}}(%rsp), %rax
+; CHECK-NEXT: movq %rax, 8
+; CHECK-NEXT: movl $48, 4
+; CHECK-NEXT: movl $8, 0
+; CHECK-NEXT: movl $1, %eax
+; CHECK-NEXT: LBB0_4: ## %if.end
+; CHECK-NEXT: addq $56, %rsp
+; CHECK-NEXT: retq
entry:
-; CHECK: {{^}} testb $2, %bh
-; CHECK-NOT: test
-; CHECK: {{^}} je
%and = and i32 %flags, 512
%tobool = icmp eq i32 %and, 0
br i1 %tobool, label %if.end, label %if.then
diff --git a/test/CodeGen/X86/vector-shuffle-combining-xop.ll b/test/CodeGen/X86/vector-shuffle-combining-xop.ll
index 83001cf5fb9..dc08ad8a3de 100644
--- a/test/CodeGen/X86/vector-shuffle-combining-xop.ll
+++ b/test/CodeGen/X86/vector-shuffle-combining-xop.ll
@@ -1,8 +1,8 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx,+xop | FileCheck %s --check-prefix=X32
-; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx2,+xop | FileCheck %s --check-prefix=X32
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx,+xop | FileCheck %s --check-prefix=X64
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx2,+xop | FileCheck %s --check-prefix=X64
+; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx,+xop | FileCheck %s --check-prefix=X32 --check-prefix=X86AVX
+; RUN: llc < %s -mtriple=i686-unknown-unknown -mattr=+avx2,+xop | FileCheck %s --check-prefix=X32 --check-prefix=X86AVX2
+; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx,+xop | FileCheck %s --check-prefix=X64 --check-prefix=X64AVX
+; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=+avx2,+xop | FileCheck %s --check-prefix=X64 --check-prefix=X64AVX2
declare <2 x double> @llvm.x86.xop.vpermil2pd(<2 x double>, <2 x double>, <2 x i64>, i8) nounwind readnone
declare <4 x double> @llvm.x86.xop.vpermil2pd.256(<4 x double>, <4 x double>, <4 x i64>, i8) nounwind readnone
@@ -320,20 +320,35 @@ define <4 x i32> @combine_vpperm_10zz32BA(<4 x i32> %a0, <4 x i32> %a1) {
; FIXME: Duplicated load in i686
define void @buildvector_v4f32_0404(float %a, float %b, <4 x float>* %ptr) {
-; X32-LABEL: buildvector_v4f32_0404:
-; X32: # %bb.0:
-; X32-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
-; X32-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0,1],mem[0],xmm0[3]
-; X32-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0,1,2],mem[0]
-; X32-NEXT: vmovaps %xmm0, (%eax)
-; X32-NEXT: retl
+; X86AVX-LABEL: buildvector_v4f32_0404:
+; X86AVX: # %bb.0:
+; X86AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86AVX-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
+; X86AVX-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0,1],mem[0],xmm0[3]
+; X86AVX-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0,1,2],mem[0]
+; X86AVX-NEXT: vmovaps %xmm0, (%eax)
+; X86AVX-NEXT: retl
;
-; X64-LABEL: buildvector_v4f32_0404:
-; X64: # %bb.0:
-; X64-NEXT: vpermil2ps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[0],xmm1[0]
-; X64-NEXT: vmovaps %xmm0, (%rdi)
-; X64-NEXT: retq
+; X86AVX2-LABEL: buildvector_v4f32_0404:
+; X86AVX2: # %bb.0:
+; X86AVX2-NEXT: movl {{[0-9]+}}(%esp), %eax
+; X86AVX2-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
+; X86AVX2-NEXT: vmovddup {{.*#+}} xmm0 = xmm0[0,0]
+; X86AVX2-NEXT: vmovapd %xmm0, (%eax)
+; X86AVX2-NEXT: retl
+;
+; X64AVX-LABEL: buildvector_v4f32_0404:
+; X64AVX: # %bb.0:
+; X64AVX-NEXT: vpermil2ps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[0],xmm1[0]
+; X64AVX-NEXT: vmovaps %xmm0, (%rdi)
+; X64AVX-NEXT: retq
+;
+; X64AVX2-LABEL: buildvector_v4f32_0404:
+; X64AVX2: # %bb.0:
+; X64AVX2-NEXT: vinsertps {{.*#+}} xmm0 = xmm0[0],xmm1[0],xmm0[2,3]
+; X64AVX2-NEXT: vmovddup {{.*#+}} xmm0 = xmm0[0,0]
+; X64AVX2-NEXT: vmovapd %xmm0, (%rdi)
+; X64AVX2-NEXT: retq
%v0 = insertelement <4 x float> undef, float %a, i32 0
%v1 = insertelement <4 x float> %v0, float %b, i32 1
%v2 = insertelement <4 x float> %v1, float %a, i32 2
diff --git a/test/CodeGen/X86/vector-shuffle-variable-256.ll b/test/CodeGen/X86/vector-shuffle-variable-256.ll
index 91672d07b05..0c806d76273 100644
--- a/test/CodeGen/X86/vector-shuffle-variable-256.ll
+++ b/test/CodeGen/X86/vector-shuffle-variable-256.ll
@@ -47,8 +47,7 @@ define <4 x double> @var_shuffle_v4f64_v4f64_uxx0_i64(<4 x double> %x, i64 %i0,
; ALL-NEXT: andl $3, %edx
; ALL-NEXT: andl $3, %esi
; ALL-NEXT: vmovaps %ymm0, (%rsp)
-; ALL-NEXT: vmovsd {{.*#+}} xmm0 = mem[0],zero
-; ALL-NEXT: vmovddup {{.*#+}} xmm0 = xmm0[0,0]
+; ALL-NEXT: vmovddup {{.*#+}} xmm0 = mem[0,0]
; ALL-NEXT: vmovsd {{.*#+}} xmm1 = mem[0],zero
; ALL-NEXT: vinsertf128 $1, %xmm1, %ymm0, %ymm0
; ALL-NEXT: movq %rbp, %rsp