We have SharePoint Server 2013 spread over 4 WFEs and 2 app servers. The search topology crawls using 3 servers, and content processing also runs on 3 servers. The index is replicated across a couple of servers. We have multiple content sources. One content source is dedicated to crawling 1500+ team sites, with a content size of about 1 TB. Other content sources cover relatively smaller content, such as the Intranet. All sites go through full crawls over the weekend.
For many months, we noticed that the team site full crawl caused the index to degrade; sometimes it degraded both replicas. We tried our best to troubleshoot and opened tickets with Microsoft, who asked us for many search reports.
We have page files on our servers. Our platform team had decided that the page file did not need to be the recommended size and had set it to a smaller value. Then one fine day, my supervisor wondered what would happen if we set the page file to the recommended value, which is 1.5 times the physical memory on the server. As soon as this was done, the problem went away magically. It has been more than a month, and the problem has not come back.
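The sizing rule above is simple arithmetic, and a minimal sketch may help when checking each server. The function name `recommended_pagefile_mb` is illustrative, not from any SharePoint API; the 64 GB figure is only an example, not our actual server spec.

```python
def recommended_pagefile_mb(physical_ram_mb: int) -> int:
    """Recommended page file size: 1.5 x physical RAM, in MB."""
    return int(physical_ram_mb * 1.5)

# Example: a server with 64 GB of physical memory
ram_mb = 64 * 1024  # 65536 MB
print(recommended_pagefile_mb(ram_mb))  # 98304 MB, i.e. 96 GB
```

On Windows Server, the resulting value would then be applied to the page file settings (for example via System Properties, or via WMI tooling), setting both the initial and maximum size so the file does not grow dynamically under load.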