English Dictionary / Chinese Dictionary (51ZiDian.com)




economize    Pronunciation: [ɪˈkɑnəˌmaɪz]
v.t. & v.i. 节约,节省,有效地利用 (to economize; to save; to use efficiently)

economize
v 1: use cautiously and frugally; "I try to economize my spare
time"; "conserve your energy for the ascent to the summit"
[synonym: {conserve}, {husband}, {economize}, {economise}]
[ant: {blow}, {squander}, {waste}]
2: spend sparingly, avoid the waste of; "This move will save
money"; "The less fortunate will have to economize now" [synonym:
{save}, {economize}, {economise}]

Economize \E*con"o*mize\, v. i.
To be prudently sparing in expenditure; to be frugal and
saving; as, to economize in order to grow rich. [Written also
{economise}.] --Milton.
[1913 Webster]


Economize \E*con"o*mize\ ([-e]*k[o^]n"[-o]*m[imac]z), v. t.
[imp. & p. p. {Economized} ([-e]*k[o^]n"[-o]*m[imac]zd); p.
pr. & vb. n. {Economizing}.] [Cf. F. ['e]conomiser.]
To manage with economy; to use with prudence; to expend with
frugality; as, to economize one's income. [Written also
{economise}.]
[1913 Webster]

Expenses in the city were to be economized. --Jowett (Thucyd.).
[1913 Webster]

Calculating how to economize time. --W. Irving.
[1913 Webster]



Chinese Dictionary - English Dictionary  2005-2009