
github.com/torvalds/linux.git
Age  Commit message  Author
2022-06-20  xtensa: change '.bss' to '.section .bss'  (Max Filippov)
For some reason (ancient assembler?) the following build error is reported by the kisskb: kisskb/src/arch/xtensa/kernel/entry.S: Error: unknown pseudo-op: `.bss' => 2176. Change the abbreviated '.bss' to the full '.section .bss, "aw"' to fix this error. Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-06-19  xtensa: xtfpga: Fix refcount leak bug in setup  (Liang He)
In machine_setup(), of_find_compatible_node() will return a node pointer with refcount incremented. We should use of_node_put() when it is not used anymore. Cc: stable@vger.kernel.org Signed-off-by: Liang He <windhl@126.com> Message-Id: <20220617115323.4046905-1-windhl@126.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-06-19  xtensa: Fix refcount leak bug in time.c  (Liang He)
In calibrate_ccount(), of_find_compatible_node() will return a node pointer with refcount incremented. We should use of_node_put() when it is not used anymore. Cc: stable@vger.kernel.org Signed-off-by: Liang He <windhl@126.com> Message-Id: <20220617124432.4049006-1-windhl@126.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
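Both refcount fixes above follow the same device-tree idiom: of_find_compatible_node() returns its result with an elevated refcount, so the caller must drop it with of_node_put() once the node is no longer needed. A minimal sketch of the pattern (the compatible string below is a placeholder, not the actual xtensa code):

    struct device_node *np;

    np = of_find_compatible_node(NULL, NULL, "vendor,some-device");
    if (np) {
            /* ... read properties, map registers, etc. ... */
            of_node_put(np);        /* drop the reference taken by the lookup */
    }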
2022-06-04  Merge tag 'ptrace_stop-cleanup-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace  (Linus Torvalds)
Pull ptrace_stop cleanups from Eric Biederman:
 "While looking at the ptrace problems with PREEMPT_RT and the problems Peter Zijlstra was encountering with ptrace in his freezer rewrite I identified some cleanups to ptrace_stop that make sense on their own and make resolving the other problems much simpler.

 The biggest issue is the habit of the ptrace code to change task->__state from the tracer to suppress TASK_WAKEKILL from waking up the tracee. No other code in the kernel does that and it is straightforward to update signal_wake_up and friends to make that unnecessary.

 Peter's task freezer sets frozen tasks to a new state TASK_FROZEN and then it stores them by calling "wake_up_state(t, TASK_FROZEN)" relying on the fact that all stopped states except the special stop states can tolerate spurious wake up and recover their state.

 The state of stopped and traced tasks is changed to be stored in task->jobctl as well as in task->__state. This makes it possible for the freezer to recover tasks in these special states, as well as serving as a general cleanup. With a little more work in that direction I believe TASK_STOPPED can learn to tolerate spurious wake ups and become an ordinary stop state.

 The TASK_TRACED state has to remain a special state as the registers for a process are only reliably available when the process is stopped in the scheduler. Fundamentally ptrace needs access to the saved register values of a task.

 There are a bunch of semi-random ptrace related cleanups that were found while looking at these issues. One cleanup that deserves to be called out is from commit 57b6de08b5f6 ("ptrace: Admit ptrace_stop can generate spuriuos SIGTRAPs"). This makes a change that is technically user space visible, in the handling of what happens to a tracee when a tracer dies unexpectedly. According to our testing and our understanding of userspace nothing cares that spurious SIGTRAPs can be generated in that case"

* tag 'ptrace_stop-cleanup-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
  sched,signal,ptrace: Rework TASK_TRACED, TASK_STOPPED state
  ptrace: Always take siglock in ptrace_resume
  ptrace: Don't change __state
  ptrace: Admit ptrace_stop can generate spuriuos SIGTRAPs
  ptrace: Document that wait_task_inactive can't fail
  ptrace: Reimplement PTRACE_KILL by always sending SIGKILL
  signal: Use lockdep_assert_held instead of assert_spin_locked
  ptrace: Remove arch_ptrace_attach
  ptrace/xtensa: Replace PT_SINGLESTEP with TIF_SINGLESTEP
  ptrace/um: Replace PT_DTRACE with TIF_SINGLESTEP
  signal: Replace __group_send_sig_info with send_signal_locked
  signal: Rename send_signal send_signal_locked
2022-06-04  Merge tag 'kthread-cleanups-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace  (Linus Torvalds)
Pull kthread updates from Eric Biederman:
 "This updates init and user mode helper tasks to be ordinary user mode tasks.

 Commit 40966e316f86 ("kthread: Ensure struct kthread is present for all kthreads") caused init and the user mode helper threads that call kernel_execve to have struct kthread allocated for them. This struct kthread going away during execve in turn made a use after free of struct kthread possible.

 Here, commit 343f4c49f243 ("kthread: Don't allocate kthread_struct for init and umh") is enough to fix the use after free and is simple enough to be backportable. The rest of the changes pass struct kernel_clone_args to clean things up and cause the code to make sense.

 In making init and the user mode helper tasks purely user mode tasks I ran into two complications. The function task_tick_numa was detecting tasks without an mm by testing for the presence of PF_KTHREAD. The initramfs code in populate_initrd_image was using flush_delayed_fput to ensure the closing of all its file descriptors was complete, and flush_delayed_fput does not work in a userspace thread.

 I have looked and looked for more complications and in my code review I have not found any, and neither has anyone else with the code sitting in linux-next"

* tag 'kthread-cleanups-for-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
  sched: Update task_tick_numa to ignore tasks without an mm
  fork: Stop allowing kthreads to call execve
  fork: Explicitly set PF_KTHREAD
  init: Deal with the init process being a user mode process
  fork: Generalize PF_IO_WORKER handling
  fork: Explicity test for idle tasks in copy_thread
  fork: Pass struct kernel_clone_args into copy_thread
  kthread: Don't allocate kthread_struct for init and umh
2022-05-24  Merge tag 'random-5.19-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random  (Linus Torvalds)
Pull random number generator updates from Jason Donenfeld:
 "These updates continue to refine the work begun in 5.17 and 5.18 of modernizing the RNG's crypto and streamlining and documenting its code. New for 5.19, the updates aim to improve entropy collection methods and make some initial decisions regarding the "premature next" problem and our threat model. The cloc utility now reports that random.c is 931 lines of code and 466 lines of comments, not that basic metrics like that mean all that much, but at the very least it tells you that this is very much a manageable driver now.

 Here's a summary of the various updates:

 - The random_get_entropy() function now always returns something at least minimally useful. This is the primary entropy source in most collectors, which in the best case expands to something like RDTSC, but prior to this change, in the worst case it would just return 0, contributing nothing. For 5.19, additional architectures are wired up, and architectures that are entirely missing a cycle counter now have a generic fallback path, which uses the highest resolution clock available from the timekeeping subsystem.

   Some of those clocks can actually be quite good, despite the CPU not having a cycle counter of its own, and going off-core for a stamp is generally thought to increase jitter, something positive from the perspective of entropy gathering. Done very early on in the development cycle, this has been sitting in next getting some testing for a while now and has relevant acks from the archs, so it should be pretty well tested and fine, but is nonetheless the thing I'll be keeping my eye on most closely.

 - Of particular note with the random_get_entropy() improvements is MIPS, which, on CPUs that lack the c0 count register, will now combine the high-speed but short-cycle c0 random register with the lower-speed but long-cycle generic fallback path.

 - With random_get_entropy() now always returning something useful, the interrupt handler now collects entropy in a consistent construction.

 - Rather than comparing two samples of random_get_entropy() for the jitter dance, the algorithm now tests many samples, and uses the amount of differing ones to determine whether or not jitter entropy is usable and how laborious it must be. The problem with comparing only two samples was that if the cycle counter was extremely slow, but just so happened to be on the cusp of a change, the slowness wouldn't be detected. Taking many samples fixes that to some degree.

   This, combined with the other improvements to random_get_entropy(), should make future unification of /dev/random and /dev/urandom maybe more possible. At the very least, were we to attempt it again today (we're not), it wouldn't break any of Guenter's test rigs that broke when we tried it with 5.18. So, not today, but perhaps down the road, that's something we can revisit.

 - We attempt to reseed the RNG immediately upon waking up from system suspend or hibernation, making use of the various timestamps about suspend time and such available, as well as the usual inputs such as RDRAND when available.

 - Batched randomness now falls back to ordinary randomness before the RNG is initialized. This provides more consistent guarantees to the types of random numbers being returned by the various accessors.

 - The "pre-init injection" code is now gone for good. I suspect you in particular will be happy to read that, as I recall you expressing your distaste for it a few months ago.

   Instead, to avoid a "premature first" issue, while still allowing for maximal amount of entropy availability during system boot, the first 128 bits of estimated entropy are used immediately as it arrives, with the next 128 bits being buffered. And, as before, after the RNG has been fully initialized, it winds up reseeding anyway a few seconds later in most cases. This resulted in a pretty big simplification of the initialization code and let us remove various ad-hoc mechanisms like the ugly crng_pre_init_inject().

 - The RNG no longer pretends to handle the "premature next" security model, something that various academics and other RNG designs have tried to care about in the past. After an interesting mailing list thread, these issues are thought to be a) mainly academic and not practical at all, and b) actively harming the real security of the RNG by delaying new entropy additions after a potential compromise, making a potentially bad situation even worse. As well, in the first place, our RNG never even properly handled the premature next issue, so removing an incomplete solution to a fake problem was particularly nice.

   This allowed for numerous other simplifications in the code, which is a lot cleaner as a consequence. If you didn't see it before, https://lore.kernel.org/lkml/YmlMGx6+uigkGiZ0@zx2c4.com/ may be a thread worth skimming through.

 - While the interrupt handler received a separate code path years ago that avoids locks by using per-cpu data structures and a faster mixing algorithm, in order to reduce interrupt latency, input and disk events that are triggered in hardirq handlers were still hitting locks and more expensive algorithms. Those are now redirected to use the faster per-cpu data structures.

 - Rather than having the fake-crypto almost-siphash-based random32 implementation be used right and left, and in many places where cryptographically secure randomness is desirable, the batched entropy code is now fast enough to replace that.

 - As usual, numerous code quality and documentation cleanups. For example, the initialization state machine now uses enum symbolic constants instead of just hard coding numbers everywhere.

 - Since the RNG initializes once, and then is always initialized thereafter, a pretty heavy amount of code used during that initialization is never used again. It is now completely cordoned off using static branches and it winds up in the .text.unlikely section so that it doesn't reduce cache compactness after the RNG is ready.

 - A variety of functions meant for waiting on the RNG to be initialized were only used by vsprintf, and in not a particularly optimal way. Replacing that usage with a more ordinary setup made it possible to remove those functions.

 - A cleanup of how we warn userspace about the use of uninitialized /dev/urandom and uninitialized get_random_bytes() usage. Interestingly, with the change you merged for 5.18 that attempts to use jitter (but does not block if it can't), the majority of users should never see those warnings for /dev/urandom at all now, and the one for in-kernel usage is mainly a debug thing.

 - The file_operations struct for /dev/[u]random now implements .read_iter and .write_iter instead of .read and .write, allowing it to also implement .splice_read and .splice_write, which makes splice(2) work again after it was broken here (and in many other places in the tree) during the set_fs() removal.

   This was a bit of a last minute arrival from Jens that hasn't had as much time to bake, so I'll be keeping my eye on this as well, but it seems fairly ordinary. Unfortunately, read_iter() is around 3% slower than read() in my tests, which I'm not thrilled about. But Jens and Al, spurred by this observation, seem to be making progress in removing the bottlenecks on the iter paths in the VFS layer in general, which should remove the performance gap for all drivers.

 - Assorted other bug fixes, cleanups, and optimizations.

 - A small SipHash cleanup"

* tag 'random-5.19-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random: (49 commits)
  random: check for signals after page of pool writes
  random: wire up fops->splice_{read,write}_iter()
  random: convert to using fops->write_iter()
  random: convert to using fops->read_iter()
  random: unify batched entropy implementations
  random: move randomize_page() into mm where it belongs
  random: remove mostly unused async readiness notifier
  random: remove get_random_bytes_arch() and add rng_has_arch_random()
  random: move initialization functions out of hot pages
  random: make consistent use of buf and len
  random: use proper return types on get_random_{int,long}_wait()
  random: remove extern from functions in header
  random: use static branch for crng_ready()
  random: credit architectural init the exact amount
  random: handle latent entropy and command line from random_init()
  random: use proper jiffies comparison macro
  random: remove ratelimiting for in-kernel unseeded randomness
  random: move initialization out of reseeding hot path
  random: avoid initializing twice in credit race
  random: use symbolic constants for crng_init states
  ...
2022-05-23  xtensa: Return true/false (not 1/0) from bool function  (Yang Li)
Return boolean values ("true" or "false") instead of 1 or 0 from bool functions. This fixes the following warning from coccicheck: ./arch/xtensa/kernel/traps.c:304:10-11: WARNING: return of 0/1 in function 'check_div0' with return type bool Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Message-Id: <20220518230953.112266-1-yang.lee@linux.alibaba.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-17  xtensa: improve call0 ABI probing  (Max Filippov)
When call0 userspace ABI support by probing is enabled, instructions that cause an illegal instruction exception while PS.WOE is clear are retried with PS.WOE set before calling the C-level exception handler. Record the user pc at which PS.WOE was set in the fast exception handler and clear PS.WOE in the C-level exception handler if we get there from the same address. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-17  xtensa: support artificial division by 0 exception  (Max Filippov)
On xtensa cores without the hardware division option, division support functions from libgcc react to a division by 0 attempt by executing an illegal instruction followed by the characters 'DIV0'. Recognize this pattern in the illegal instruction exception handler and convert it to division by 0. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
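The recognition amounts to checking whether the bytes following the faulting instruction spell out the 'DIV0' marker that libgcc plants. A rough sketch of the idea only; the function name, the byte offset and the exact user/kernel handling here are simplified assumptions, not the literal xtensa handler:

    #include <linux/string.h>
    #include <linux/uaccess.h>

    static bool insn_is_div0_trap(struct pt_regs *regs)
    {
            static const u8 marker[] = { 'D', 'I', 'V', '0' };
            u8 buf[sizeof(marker)];
            /* assumed offset: marker bytes placed right after the illegal opcode */
            const u8 *p = (const u8 *)(regs->pc + 3);

            if (user_mode(regs)) {
                    if (copy_from_user(buf, (const void __user *)p, sizeof(buf)))
                            return false;   /* unreadable: not the libgcc pattern */
                    p = buf;
            }
            return !memcmp(p, marker, sizeof(marker));
    }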
2022-05-14  xtensa: use fallback for random_get_entropy() instead of zero  (Jason A. Donenfeld)
In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Arnd Bergmann <arnd@arndb.de> Acked-by: Max Filippov <jcmvbkbc@gmail.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
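The shape of the change is that the architecture stops defining a zero-returning stub and picks up the asm-generic definition instead. On a configuration with no usable cycle counter that effectively reduces to the fallback, roughly like this (a simplified sketch, not the exact header contents):

    /* sketch: what random_get_entropy() boils down to without a cycle counter */
    static inline unsigned long random_get_entropy(void)
    {
            return random_get_entropy_fallback();
    }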
2022-05-13  xtensa: add trap handler for division by zero  (Max Filippov)
Add a C-level handler for the division by zero exception and kill the task if it was thrown from kernel space, or send SIGFPE otherwise. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-11  ptrace/xtensa: Replace PT_SINGLESTEP with TIF_SINGLESTEP  (Eric W. Biederman)
xtensa is the last user of the PT_SINGLESTEP flag. Changing tsk->ptrace in user_enable_single_step and user_disable_single_step without locking could potentially cause problems. So use a thread info flag instead of a flag in tsk->ptrace. Use TIF_SINGLESTEP that xtensa already had defined but unused. Remove the definitions of PT_SINGLESTEP and PT_BLOCKSTEP as they have no more users. Cc: stable@vger.kernel.org Acked-by: Max Filippov <jcmvbkbc@gmail.com> Tested-by: Kees Cook <keescook@chromium.org> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Link: https://lkml.kernel.org/r/20220505182645.497868-4-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
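The resulting single-step helpers just toggle a thread-info flag instead of poking tsk->ptrace; in sketch form (this matches the general shape of the change, details hedged):

    void user_enable_single_step(struct task_struct *child)
    {
            set_tsk_thread_flag(child, TIF_SINGLESTEP);
    }

    void user_disable_single_step(struct task_struct *child)
    {
            clear_tsk_thread_flag(child, TIF_SINGLESTEP);
    }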
2022-05-11  xtensa/simdisk: fix proc_read_simdisk()  (Yi Yang)
The commit a69755b18774 ("xtensa simdisk: switch to proc_create_data()") split read operation into two parts, first retrieving the path when it's non-null and second retrieving the trailing '\n'. However when the path is non-null the first simple_read_from_buffer updates ppos, and the second simple_read_from_buffer returns 0 if ppos is greater than 1 (i.e. almost always). As a result reading from that proc file is almost always empty. Fix it by making a temporary copy of the path with the trailing '\n' and using simple_read_from_buffer on that copy. Cc: stable@vger.kernel.org Fixes: a69755b18774 ("xtensa simdisk: switch to proc_create_data()") Signed-off-by: Yi Yang <yiyang13@huawei.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
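One way to implement the described fix is to format the path plus the trailing '\n' into a single temporary buffer and hand only that buffer to simple_read_from_buffer(), so ppos is consumed once. A hedged sketch; struct simdisk, its filename field and the pde_data() accessor are assumed names here, not a claim about the exact driver code:

    static ssize_t proc_read_simdisk(struct file *file, char __user *buf,
                                     size_t size, loff_t *ppos)
    {
            struct simdisk *dev = pde_data(file_inode(file));   /* assumed accessor */
            const char *s = dev->filename;
            ssize_t n;

            if (s) {
                    char *tmp = kasprintf(GFP_KERNEL, "%s\n", s);

                    if (!tmp)
                            return -ENOMEM;
                    n = simple_read_from_buffer(buf, size, ppos, tmp, strlen(tmp));
                    kfree(tmp);
            } else {
                    n = simple_read_from_buffer(buf, size, ppos, "\n", 1);
            }
            return n;
    }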
2022-05-11  xtensa: no need to initialise statics to 0  (Jason Wang)
Static variables do not need to be initialised to 0, because the compiler will initialise all uninitialised statics to 0. Thus, remove the unneeded initializations. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Message-Id: <20220508022910.98481-1-wangborong@cdjrlc.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-07  fork: Generalize PF_IO_WORKER handling  (Eric W. Biederman)
Add fn and fn_arg members into struct kernel_clone_args and test for them in copy_thread (instead of testing for PF_KTHREAD | PF_IO_WORKER). This allows any task that wants to be a user space task that only runs in kernel mode to use this functionality. The code on x86 is an exception and still retains a PF_KTHREAD test because x86, unlike everything else, handles kthreads slightly differently than user space tasks that start with a function. The functions that created tasks that start with a function have been updated to set ".fn" and ".fn_arg" instead of ".stack" and ".stack_size". These functions are fork_idle(), create_io_thread(), kernel_thread(), and user_mode_thread(). Link: https://lkml.kernel.org/r/20220506141512.516114-4-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
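For illustration, a caller such as kernel_thread() now fills in .fn and .fn_arg rather than smuggling the start function through the stack fields; roughly (a sketch of the shape of the change, the exact flag handling may differ):

    pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
    {
            struct kernel_clone_args args = {
                    .flags          = ((lower_32_bits(flags) | CLONE_VM |
                                        CLONE_UNTRACED) & ~CSIGNAL),
                    .exit_signal    = (lower_32_bits(flags) & CSIGNAL),
                    .fn             = fn,       /* start function for the new task */
                    .fn_arg         = arg,
            };

            return kernel_clone(&args);
    }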
2022-05-07  fork: Pass struct kernel_clone_args into copy_thread  (Eric W. Biederman)
With io_uring we have started supporting tasks that are for most purposes user space tasks that exclusively run code in kernel mode. The kernel task that exec's init and tasks that exec user mode helpers are also user mode tasks that just run kernel code until they call kernel execve. Pass kernel_clone_args into copy_thread so these oddball tasks can be supported more cleanly and easily. v2: Fix spelling of kenrel_clone_args on h8300 Link: https://lkml.kernel.org/r/20220506141512.516114-2-ebiederm@xmission.com Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-05-02  xtensa: clean up labels in the kernel entry assembly  (Max Filippov)
Don't use numeric labels for long jumps, use named local labels instead. Avoid conditional label definition. No functional changes. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: don't leave invalid TLB entry in fast_store_prohibited  (Max Filippov)
When fast_store_prohibited needs to go to the C-level exception handler it leaves the TLB entry that caused the page fault in the TLB. If the faulting task gets switched to a different CPU and completes the page table update there, the TLB entry will get out of sync with the page table, which may cause a livelock on access to that page. Invalidate the faulting TLB entry on the slow path exit from fast_store_prohibited. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: fix declaration of _SecondaryResetVector_text_*  (Max Filippov)
Secondary reset vector is defined, compiled and used when CONFIG_SECONDARY_RESET_VECTOR is enabled, not only on SMP. Make declarations of _SecondaryResetVector_text_* symbols available accordingly. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: enable ARCH_HAS_DEBUG_VM_PGTABLE  (Max Filippov)
xtensa kernels successfully build and run with CONFIG_DEBUG_VM_PGTABLE=y, enable arch support for it. Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: add hibernation support  (Max Filippov)
Define ARCH_HIBERNATION_POSSIBLE in Kconfig and implement hibernation callbacks. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: support coprocessors on SMP  (Max Filippov)
Current coprocessor support on xtensa only works correctly on uniprocessor configurations. Make it work on SMP too and keep it lazy. Make the coprocessor_owner array per-CPU and move it to struct exc_table for easy access from the fast_coprocessor exception handler. Allow a task to have live coprocessors only on a single CPU, and record this CPU number in struct thread_info::cp_owner_cpu. Change the meaning of struct thread_info::cpenable to be 'coprocessors live on cp_owner_cpu'. Introduce a C-level coprocessor exception handler that flushes and releases live coprocessors of the task taking the 'coprocessor disabled' exception, and call it from the fast_coprocessor handler when the task has live coprocessors on another CPU. Make coprocessor_flush_all and coprocessor_release_all work correctly when called from any CPU by sending an IPI to the cp_owner_cpu. Add function coprocessor_flush_release_all to do flush followed by release atomically. Add function local_coprocessors_flush_release_all to flush and release all coprocessors on the local CPU and use it to flush coprocessor contexts from the CPU that goes offline. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: get rid of stack frame in coprocessor_flush  (Max Filippov)
coprocessor_flush is an ordinary function, it can use all registers. Don't reserve stack frame for it and use a7 to preserve a0 around the context saving call. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: merge SAVE_CP_REGS_TAB and LOAD_CP_REGS_TAB  (Max Filippov)
Both tables share the same offset field but have different function pointers. Merge them into a single table with 3-element entries to reduce code and data duplication. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: add xtensa_xsr macro  (Max Filippov)
xtensa_xsr does the XSR instruction for the specified special register. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: handle coprocessor exceptions in kernel mode  (Max Filippov)
In order to let drivers use xtensa coprocessors on behalf of the calling process the kernel must handle coprocessor exceptions from kernel mode the same way as from user mode. This is not sufficient to allow using coprocessors transparently in IRQ or softirq context. Should such users exist they must be aware of the context and do the right thing, e.g. preserve the coprocessor state and restore it after use. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: use callx0 opcode in fast_coprocessor  (Max Filippov)
Instead of emulating call0 in fast_coprocessor use that opcode directly. Use 'ret' instead of 'jx a0'. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: clean up excsave1 initialization  (Max Filippov)
Use xtensa_set_sr instead of inline assembly. Rename local variable exc_table in early_trap_init to avoid conflict with per-CPU variable of the same name. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: clean up declarations in coprocessor.h  (Max Filippov)
Drop 'extern' from all function declarations. Add parameter names in declarations. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: clean up exception handler prototypes  (Max Filippov)
Exception handlers are currently passed as void pointers because they may have one or two parameters. Only two handlers use the second parameter, and it is available in struct pt_regs anyway. Make all handlers have only one parameter, introduce the xtensa_exception_handler type for handlers and use it in trap_set_handler. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
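In other words, the handler signature collapses to a single pt_regs argument, along these lines (a sketch; the exact declarations live in the xtensa headers):

    typedef void xtensa_exception_handler(struct pt_regs *regs);

    /* register a handler for a given exception cause, returning the old one */
    xtensa_exception_handler *trap_set_handler(int cause,
                                               xtensa_exception_handler *handler);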
2022-05-02  xtensa: clean up function declarations in traps.c  (Max Filippov)
Drop 'extern' from all function declarations and move those that need to be visible from traps.c to traps.h. Add 'asmlinkage' to declarations of functions defined in assembly. Add 'static' to declarations and definitions only used locally. Add argument names in declarations. Drop the unused second argument from do_multihit and do_page_fault. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: enable KCSAN  (Max Filippov)
Prefix arch-specific barrier macros with '__' to make use of instrumented generic macros. Prefix arch-specific bitops with 'arch_' to make use of instrumented generic functions. Provide stubs for 64-bit atomics when building with KCSAN. Disable KCSAN instrumentation in arch/xtensa/boot. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Acked-by: Marco Elver <elver@google.com>
2022-05-02  xtensa: enable HAVE_VIRT_CPU_ACCOUNTING_GEN  (Max Filippov)
There's no direct cputime_t manipulation in the xtensa arch code, so generic virt CPU accounting may be enabled. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: enable context tracking  (Max Filippov)
Put user exit context tracking call on the common kernel entry/exit path (function calls are impossible at earlier kernel entry stages because PS.EXCM is not cleared yet). Put user entry context tracking call on the user exit path. Syscalls go through this common code too, so nothing specific needs to be done for them. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: use abi_* register names in the kernel exit code  (Max Filippov)
Using plain register names is prone to errors when code is changed and new calls are added between the register load and use. Change plain register names to abi_* names in the call-heavy part of the kernel exit code to clearly indicate what's supposed to be preserved and what's not. Re-align code while at it. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: move trace_hardirqs_off call back to entry.S  (Max Filippov)
The context tracking call must be done after the hardirq tracking call, otherwise lockdep_assert_irqs_disabled called from rcu_eqs_exit gives a warning. To avoid context tracking logic duplication for IRQ/exception entry paths move the trace_hardirqs_off call back to common entry code. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: drop dead code from entry.S  (Max Filippov)
KERNEL_STACK_OVERFLOW_CHECK is incomplete and has never been enabled. Remove it. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: noMMU: allow handling protection faults  (Max Filippov)
Many xtensa CPU cores without a full MMU still have memory protection features capable of raising exceptions for invalid instruction fetches/data access. Allow handling such exceptions. This improves the behavior of processes that pass invalid memory pointers to syscalls in noMMU configs: in case of an exception the kernel, instead of killing the process, is now able to return -EINVAL from a syscall. Introduce CONFIG_PFAULT that controls whether protection fault code is enabled and register handlers for common memory protection exceptions when it is enabled. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: extract vmalloc_fault code into a function  (Max Filippov)
Move full MMU-specific code into a separate function to isolate it from more generic do_page_fault code. No functional changes. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: move asid_cache from fault.c to mmu.c  (Max Filippov)
asid_cache is only useful with full MMU, but fault.c is also useful with MPU. Move asid_cache definition to MMU-specific source file. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: iss: extract and constify network callbacks  (Max Filippov)
Instead of storing pointers to callback functions in struct iss_net_private::tp, move them to struct iss_net_ops and store a const pointer to it. Make a static const tuntap_ops structure with tuntap callbacks and initialize tp.net_ops with it in tuntap_probe. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
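The result is the usual constant-ops pattern; sketched below with assumed callback names and fields (the real structure in the driver may differ):

    struct iss_net_ops {
            int  (*open)(struct iss_net_private *lp);
            void (*close)(struct iss_net_private *lp);
            int  (*read)(struct iss_net_private *lp, struct sk_buff **skb);
            int  (*write)(struct iss_net_private *lp, struct sk_buff **skb);
            int  (*poll)(struct iss_net_private *lp);
    };

    /* one read-only instance per transport; the probe points tp.net_ops at it */
    static const struct iss_net_ops tuntap_ops = {
            .open   = tuntap_open,
            .close  = tuntap_close,
            .read   = tuntap_read,
            .write  = tuntap_write,
            .poll   = tuntap_poll,
    };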
2022-05-02  xtensa: iss: clean up per-device locking in network driver  (Max Filippov)
Per-device locking in the ISS network driver is used to protect poll timer and stats updates. Stat collection is not protected. Remove per-device locking everywhere except the stats updates. Replace ndo_get_stats callback with ndo_get_stats64 and use proper locking there as well. As a side effect this fixes possible deadlock between iss_net_close and iss_net_timer. Reported by: Duoming Zhou <duoming@zju.edu.cn> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
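With that split, ndo_get_stats64 becomes the one place where the stats lock is taken; a sketch under the assumption that the driver keeps its counters in a private rtnl_link_stats64 protected by a per-device spinlock (names here are illustrative):

    static void iss_net_get_stats64(struct net_device *dev,
                                    struct rtnl_link_stats64 *stats)
    {
            struct iss_net_private *lp = netdev_priv(dev);

            spin_lock_bh(&lp->lock);
            *stats = lp->stats;     /* assumed: counters stored as rtnl_link_stats64 */
            spin_unlock_bh(&lp->lock);
    }

    static const struct net_device_ops iss_netdev_ops = {
            /* ... other callbacks ... */
            .ndo_get_stats64 = iss_net_get_stats64,
    };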
2022-05-02  xtensa: iss: replace iss_net_set_mac with eth_mac_addr  (Max Filippov)
iss_net_set_mac is just a copy of eth_mac_addr with pointless locking. Drop this function and replace it with eth_mac_addr. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-05-02  xtensa: iss: drop opened_list logic from the network driver  (Max Filippov)
opened_list is used to poll all opened devices in the timer callback, but there's an individual timer associated with each device. Drop opened_list and only poll the device that is associated with the timer in the timer callback. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
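With opened_list gone, the timer callback can recover its own device with from_timer() and poll just that one; a hedged sketch (the timer field, poll helper and interval below are assumed names, not the driver's exact code):

    static void iss_net_poll_timer(struct timer_list *t)
    {
            /* 'tl' is the assumed name of the embedded timer_list member */
            struct iss_net_private *lp = from_timer(lp, t, tl);

            iss_net_poll(lp);               /* poll only this device */
            mod_timer(&lp->tl, jiffies + msecs_to_jiffies(100));    /* assumed interval */
    }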
2022-05-02  xtensa: localize labels used in memmove  (Max Filippov)
Internal labels in the memmove implementation don't need to be visible, localize them by prefixing their names with '.L'. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-04-16  xtensa: fix a7 clobbering in coprocessor context load/store  (Max Filippov)
Fast coprocessor exception handler saves a3..a6, but coprocessor context load/store code uses a4..a7 as temporaries, potentially clobbering a7. 'Potentially' because coprocessor state load/store macros may not use all four temporary registers (and neither FPU nor HiFi macros do). Use a3..a6 as intended. Cc: stable@vger.kernel.org Fixes: c658eac628aa ("[XTENSA] Add support for configurable registers and coprocessors") Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-04-13  arch: xtensa: platforms: Fix deadlock in rs_close()  (Duoming Zhou)
There is a deadlock in rs_close(), which is shown below:

    (Thread 1)              |      (Thread 2)
                            | rs_open()
 rs_close()                 |  mod_timer()
  spin_lock_bh() //(1)      |  (wait a time)
  ...                       | rs_poll()
  del_timer_sync()          |  spin_lock() //(2)
  (wait timer to stop)      |  ...

We hold timer_lock in position (1) of thread 1 and use del_timer_sync() to wait for the timer to stop, but the timer handler also needs timer_lock in position (2) of thread 2. As a result, rs_close() will block forever. This patch deletes the redundant timer_lock in order to prevent the deadlock, because there is no race condition between rs_close, rs_open and rs_poll. Signed-off-by: Duoming Zhou <duoming@zju.edu.cn> Message-Id: <20220407154430.22387-1-duoming@zju.edu.cn> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2022-04-13  xtensa: patch_text: Fixup last cpu should be master  (Guo Ren)
These patch_text implementations use the stop_machine_cpuslocked infrastructure with an atomic cpu_count. The original idea: when the master CPU patches the text, the others should wait for it. But the current implementation uses the first CPU as the master, which couldn't guarantee the remaining CPUs are waiting. This patch makes the last CPU the master to solve the potential risk. Fixes: 64711f9a47d4 ("xtensa: implement jump_label support") Signed-off-by: Guo Ren <guoren@linux.alibaba.com> Signed-off-by: Guo Ren <guoren@kernel.org> Reviewed-by: Max Filippov <jcmvbkbc@gmail.com> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: <stable@vger.kernel.org> Message-Id: <20220407073323.743224-4-guoren@kernel.org> Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
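The stop_machine callback therefore counts CPUs in, and whichever CPU increments the counter last acts as the master while the rest spin; roughly like this (a sketch of the scheme only: the struct layout is assumed and the actual xtensa code also handles instruction-cache maintenance):

    struct patch {
            atomic_t        cpu_count;
            unsigned long   addr;
            const void      *text;
            size_t          sz;
    };

    static int patch_text_stop_machine(void *data)
    {
            struct patch *patch = data;

            if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
                    /* the last CPU to arrive patches; all others are already spinning */
                    memcpy((void *)patch->addr, patch->text, patch->sz);
                    atomic_inc(&patch->cpu_count);  /* release the waiters */
            } else {
                    while (atomic_read(&patch->cpu_count) <= num_online_cpus())
                            cpu_relax();
            }
            return 0;
    }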
2022-03-31  Merge tag 'kbuild-v5.18-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds)
Pull Kbuild updates from Masahiro Yamada:

 - Add new environment variables, USERCFLAGS and USERLDFLAGS, to allow additional flags to be passed to user-space programs

 - Fix missing fflush() bugs in Kconfig and fixdep

 - Fix a minor bug in the comment format of the .config file

 - Make kallsyms ignore llvm's local labels, .L*

 - Fix UAPI compile-test for cross-compiling with Clang

 - Extend the LLVM= syntax to support LLVM=<suffix> form for using a particular version of LLVM, and LLVM=<prefix> form for using custom LLVM in a particular directory path

 - Clean up Makefiles

* tag 'kbuild-v5.18-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: Make $(LLVM) more flexible
  kbuild: add --target to correctly cross-compile UAPI headers with Clang
  fixdep: use fflush() and ferror() to ensure successful write to files
  arch: syscalls: simplify uapi/kapi directory creation
  usr/include: replace extra-y with always-y
  certs: simplify empty certs creation in certs/Makefile
  certs: include certs/signing_key.x509 unconditionally
  kallsyms: ignore all local labels prefixed by '.L'
  kconfig: fix missing '# end of' for empty menu
  kconfig: add fflush() before ferror() check
  kbuild: replace $(if A,A,B) with $(or A,B)
  kbuild: Add environment variables for userprogs flags
  kbuild: unify cmd_copy and cmd_shipped
2022-03-31  arch: syscalls: simplify uapi/kapi directory creation  (Masahiro Yamada)
$(shell ...) expands to empty. There is no need to assign it to _dummy. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>