In this talk, we will discuss the strengths and limitations of LLMs on code analysis tasks such as code search and code clone detection. We will show when LLMs make mistakes and what kinds of mistakes they make. For example, we observe that the performance of popular LLMs relies heavily on well-defined variable and function names, so they make mistakes when misleading variable names are used. Anyone interested in exploring the intersection of AI and code security analysis is welcome to attend this talk.
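To make the misleading-name failure mode concrete, here is a hypothetical sketch (not taken from the talk itself): the function and docstring suggest sorting, while the body actually reverses its input. A model that leans on identifiers rather than semantics may describe, search for, or match this code as a sorting routine.

```python
def sort_ascending(items):
    """Despite the name, this function does NOT sort: it reverses the list."""
    # The identifier "sort_ascending" is deliberately misleading; the
    # actual behavior is a reversal, which name-reliant models may miss.
    return list(reversed(items))

print(sort_ascending([3, 1, 2]))  # [2, 1, 3], not the sorted [1, 2, 3]
```

A robust code analysis would classify this as a reversal; an identifier-driven one would likely call it a sort.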