| Commit message | Author | Age | Files | Lines |
| |
|
| |
|
|
|
|
|
| |
To me, this seems pretty useful for debugging purposes. It might
also be of general interest to the curious user.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There is some discussion on the editorconfig side
about whether a 'spell_language' key could be added
(cf. https://github.com/editorconfig/editorconfig/issues/315,
https://github.com/seifferth/vis-editorconfig/pull/8). This key would
specify the natural language a file is written in; it would then
be up to the editor or the plugin doing the spellchecking to respect
that setting and behave accordingly.
Since implementing spellchecking is out of scope for the
vis-editorconfig plugin, and since I would like the vis-editorconfig
plugin to work both with and without vis-spellcheck, I suggest
using 'vis.win.file.spell_language' to store the document
language. As of commit 0ee415c in the vis-editorconfig repo
(https://github.com/seifferth/vis-editorconfig), this value is already
set appropriately. This commit adjusts the spellchecking plugin to
read the same value. I did some testing that suggests it works.
There might still be some hiccups if 'vis.win.file' does not exist.
This is a non-issue for editorconfig, since editorconfig only works
if 'vis.win.file.path' exists (which configuration applies depends
on the path). The Readme would also need some adjustment.
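A minimal sketch of how a spellchecking plugin could pick up that value; the property name comes from the commit above, while the fallback chain and default are assumptions:

```lua
-- Hedged sketch, not the plugin's actual code: read the shared
-- 'spell_language' property, guarding against a missing vis.win.file.
local function language_for(win)
  local file = win and win.file          -- guard: vis.win.file may not exist
  return (file and file.spell_language)
      or os.getenv("LANG")               -- assumed fallback
      or "en_US"                         -- assumed default
end
```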
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| | |
check if "LANG" environment variable exists before indexing it
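The guard could look like this sketch; the default value and the slicing are assumptions, not the plugin's actual code:

```lua
-- Indexing or slicing a nil LANG would raise an error, so check it first.
local function default_language(env_lang)
  return env_lang and env_lang:match("^[^.@]+") or "en_US"
end
```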
|
|/ |
|
|
|
|
|
|
| |
* Remove unused variables
* Remove unused assignments
* Fix one actual bug (typo in variable name)
|
| |
|
|\
| |
| | |
Use single quotes around plugin path
|
|/ |
|
|
|
|
|
|
|
| |
The init.lua file is only a thin wrapper around a dofile call, which loads
the actual plugin code in spellcheck.lua and returns the result.
Closes #5.
|
|
|
|
|
|
|
| |
Because the lexer.lex wrapper function is a closure, it saves the
state of the ignored-word table in the closure.
Therefore, to see the effect of ignoring words, we must rebuild the
wrapper closure.
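A minimal illustration of the effect described above (not the plugin's code): a checker built from a snapshot of the ignored set does not see later additions, so it has to be rebuilt.

```lua
-- Illustrative only: the closure keeps the snapshot it captured at build time.
local function build_checker(ignored)
  local snapshot = {}
  for word in pairs(ignored) do snapshot[word] = true end
  return function(word) return not snapshot[word] end  -- true = flag as typo
end

local ignored = {}
local check = build_checker(ignored)
ignored["typu"] = true            -- ignoring a word after the fact...
assert(check("typu"))             -- ...has no effect on the old closure
check = build_checker(ignored)    -- so the wrapper must be rebuilt
assert(not check("typu"))
```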
|
|\
| |
| | |
Fix arrows in vis-menu by using vis:pipe()
|
|/ |
|
|\ |
|
| | |
|
| |
| |
| |
| |
| | |
If the typo itself contains magic pattern characters, we can't reliably
find it in the text.
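For example, a hypothetical helper (not the plugin's actual code) that escapes Lua pattern magic characters makes the search literal:

```lua
-- Escape Lua pattern magic characters so a typo like "e.g" matches literally.
local function escape_pattern(s)
  return (s:gsub("[%^%$%(%)%%%.%[%]%*%+%-%?]", "%%%0"))
end

-- Without escaping, "." matches any character:
assert(("xeag"):find("e.g") == 2)                    -- false positive
assert(("xeag"):find(escape_pattern("e.g")) == nil)  -- fixed
assert(("xe.gx"):find(escape_pattern("e.g")) == 2)   -- real match still found
```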
|
| |
| |
| |
| |
| |
| | |
An iterator in Lua stops if it returns nil.
With this change we get a new typo from the unfiltered_iterator if
we encounter an empty one, instead of returning nil.
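A sketch of the idea (names are illustrative): wrap the underlying iterator so an empty string pulls the next value instead of ending the iteration.

```lua
-- Skip empty strings so they don't terminate the generic-for loop.
local function skip_empty(unfiltered_iterator)
  return function()
    local typo = unfiltered_iterator()
    while typo == "" do typo = unfiltered_iterator() end
    return typo
  end
end
```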
|
| |
| |
| |
| |
| | |
Apparently the typo list produced by aspell contains a blank line,
which messes with our search for typos.
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
Before this change we assumed that when we don't find the typo with the
normal pattern (index == 1), it must be at the beginning of the text;
but it can also be at the end of the text.
We now also check the end if the beginning was not what we were looking for.
|
| | |
|
| |
| |
| |
| |
| | |
This activates spellchecking for most "text-focused" languages like
LaTeX or Markdown.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This happens because the "default" token gets emitted for each unmatched
char, resulting in lots of small "default" tokens.
This markdown:
> typu
will result in the token stream
{"default", 2, "default", 3, "default", 4, "default", 5}
and a typo stream with a single entry spanning all of the tokens:
{{"typu", 1, 5}}.
Not checking whether the typo is longer than the current token leads to an
infinite loop where the typo is always inserted into the new token stream
without advancing the token stream.
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
Our pattern [%A]<typo>[%A] does not match typos at the start or end of the
text because there is no leading / trailing non-letter character.
So if we didn't find the typo with our normal pattern, it must be at either
the start or the end of the text. If not, raise a warning that this must be a bug.
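A small illustration of the boundary problem: the delimited pattern needs a non-letter on both sides, so a typo at the very start of the text is only found with an anchored check.

```lua
local text = "typu at the start"
assert(not text:find("%Atypu%A"))   -- no leading non-letter to match
assert(text:find("^typu%A") == 1)   -- anchored at the start works
```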
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| | |
of typos
This should make our spellchecking independent of the current window.
|
| |
| |
| |
| |
| | |
If the end of a typo equaled the end of a token, we didn't advance the
typo stream, entering an infinite loop.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
When naively searching for the typo in our text, we might find it
before its actual appearance, inside a correct word containing the typo
(e.g. in "broken ok" a naive search finds the typo "ok" in the middle of
the word "broken").
To prevent this, we now only find a typo if it is enclosed in non-letter
characters; this prevents typos from being found inside regular words.
Typos starting with '"' for example are still found correctly.
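A small illustration of the difference:

```lua
local text = "broken ok?"
assert(text:find("ok") == 3)        -- false hit inside "broken"
assert(text:find("%Aok%A") == 7)    -- the actual standalone "ok"
```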
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
We collect all typos for the current viewport. (This is not sound, because
the viewport we get from vis.win.viewport may not belong to the window we
are currently lexing.)
Then we iterate over both the token stream and the typo stream:
if a token ends before the start of the current typo, or it is not a token
we spellcheck, we add its end to the new token stream and advance the token
stream. Otherwise we add the token part before the current typo (if present)
and the highlight token representing the typo, and advance the typo stream.
After we have handled all typos we add each leftover token to the new token stream.
Typos are cached and reused if the viewport and the data we lex are
unchanged. (This is sound because either the data has changed, in which case
we call the external spellchecker again, possibly for nothing; or the data
is the same as last time, in which case the typos haven't changed.)
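The token/typo merge described above can be sketched roughly as follows, using Scintillua's flat {name1, end1, name2, end2, ...} stream format; this is a simplification (at most one typo per token) and all names are illustrative, not the plugin's code:

```lua
local ERROR = "error"   -- stand-in for the highlight token name

-- tokens: flat stream of (name, end-position) pairs
-- typos:  list of {word, start, finish} with 1-based inclusive positions
-- checked: set of token names we spellcheck
local function merge_typos(tokens, typos, checked)
  local out, ti, start = {}, 1, 1
  for i = 1, #tokens, 2 do
    local name, finish = tokens[i], tokens[i + 1]
    local typo = typos[ti]
    if typo and checked[name] and typo[2] < finish then
      if typo[2] > start then          -- token part before the typo
        out[#out + 1] = name; out[#out + 1] = typo[2]
      end
      out[#out + 1] = ERROR            -- the typo itself, highlighted
      out[#out + 1] = typo[3] + 1
      ti = ti + 1                      -- advance the typo stream
      if typo[3] + 1 < finish then     -- token part after the typo
        out[#out + 1] = name; out[#out + 1] = finish
      end
    else
      out[#out + 1] = name; out[#out + 1] = finish
    end
    start = finish
  end
  return out
end
```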
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
This allows us to be independent of a vis.window and to use only
the text passed to lexer.lex for our spellchecking.
This is helpful when the active window is, for example, the command
prompt but the original source file still gets lexed.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Add get_typos(range) to retrieve a string of misspelled words in a specific
file range by calling the spellchecker's list command. It returns nil, or a
string with each misspelling followed by a newline.
Introduce typo_iter(text, typos, ignored) to iterate over all non-ignored
typos and their positions in text. It returns a stateful iterator
closure, which returns the next typo and its start and finish in the text,
starting at 1.
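A self-contained sketch of an iterator with that shape (illustrative, not the plugin's implementation): it walks the newline-separated typo list, skips ignored or empty entries, and yields every occurrence with 1-based positions via a plain (non-pattern) find.

```lua
local function typo_iter(text, typos, ignored)
  local next_typo = typos:gmatch("(.-)\n")
  local typo, pos = nil, 1
  return function()
    local s, e
    repeat
      if typo then
        s, e = text:find(typo, pos, true)   -- plain find, no magic chars
      end
      if not s then
        repeat                              -- fetch the next usable typo
          typo = next_typo()
        until typo == nil or (typo ~= "" and not ignored[typo])
        if not typo then return nil end     -- list exhausted
        pos = 1
      end
    until s
    pos = e + 1
    return typo, s, e
  end
end
```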
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Fall back to the old mechanism if no lexer is active.
Algorithm:
1. Wrap the active lexer's lex function and save its original value so it can be restored
2. Obtain the original token stream
3. Filter the tokens by visibility and by whether we want to spellcheck them (lexers.COMMENT and lexers.STRING by default)
4. Spellcheck the selected tokens. This is why it is unusably slow: we shell out to our spellchecker for each token we check.
5. Split up each checked token into its correct and misspelled parts
6. Insert the correct parts unmodified and the misspelled ones as lexers.ERROR tokens into the token stream
Drawbacks:
1. It is unusably slow for now
2. The problem of deciding which tokens should be spellchecked remains.
A lexer using different token names than we expect will prevent our spellchecking
TODO:
1. Introduce a way to disable syntax-aware spellchecking and always use
the old check-full-viewport approach
2. Speed up the syntax-aware spellchecking (maybe by splitting it into two passes:
first collect all tokens to spellcheck and shell out once to check them all,
then build up the token stream using the typo information we got)
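Steps 1 and 6 of the wrapping can be sketched as follows (all names are illustrative, and the spellchecking body is elided):

```lua
local original_lex   -- saved so the old behaviour can be restored

local function wrap_lexer(lexer)
  original_lex = lexer.lex
  lexer.lex = function(self, data, index)
    local tokens = original_lex(self, data, index)
    -- steps 2-6 would go here: filter COMMENT/STRING tokens, shell out
    -- to the spellchecker, and splice in lexers.ERROR tokens
    return tokens
  end
end

local function unwrap_lexer(lexer)
  if original_lex then
    lexer.lex = original_lex
    original_lex = nil
  end
end
```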
|
|/ |
|
|
|
|
| |
Lua indexing starts at 1, not 0.
|