Lab: LL1-LR Calculator

2019-12-12

Contents

  • 1. Principle
  • 2. LL1
  • 3. LR
  • 4. References

1. Principle

  • Omitted.

2. LL1

Adapted from the reference in section 4 and reworked with ply.
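The grammar implemented below, read off the expr, term and factor routines (a reconstruction, since section 1 is omitted), is the usual non-left-recursive arithmetic grammar, which is LL(1):

expr   → term { (+ | -) term }
term   → factor { (* | /) factor }
factor → NUMBER | ( expr )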

Lexical analysis

import ply.lex as lex

# List of token names. This is always required
tokens_op = (
    'PLUS',
    'MINUS',
    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)
tokens_num = (
    'NUMBER',
)
tokens_end = (
    'END',
)
tokens = tokens_op + tokens_num + tokens_end

# Regular expression rules for simple tokens
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_END = r'\#'


# A regular expression rule with some action code
def t_NUMBER(t):
    r'[0-9]*\.?[0-9]+((E|e)(\+|-)?[0-9]+)?'
    t.value = eval(t.value)
    return t


# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)


# A string containing ignored characters (spaces and tabs)
t_ignore = ' \t'


# Error handling rule
def t_error(t):
    print("Illegal character '%s'" % t.value[0])
    t.lexer.skip(1)
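
A quick way to sanity-check the lexer on its own (not in the original post; it assumes the rules above are saved as chap0x07_Lex.py, the module name the parser imports) is a small driver appended to the same file:

if __name__ == '__main__':
    lexer = lex.lex()             # build the lexer from the rules above
    lexer.input('2*(1.5+3)#')     # '#' is the explicit end-of-input marker (END)
    for tok in iter(lexer.token, None):
        print(tok.type, tok.value)
    # expected output, one token per line:
    # NUMBER 2, TIMES *, LPAREN (, NUMBER 1.5, PLUS +, NUMBER 3, RPAREN ), END #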

Syntax analysis

from chap0x07_Lex import *

cur_tok = None   # the one-token lookahead currently being examined
all_toks = None  # generator producing the token stream


def tlex(data):
    """Tokenize the input, appending '#' as the end-of-input marker."""
    data += '#'
    lexer = lex.lex()
    lexer.input(data)
    while True:
        tok = lexer.token()
        if not tok:
            break
        yield tok


def get_token():
    """Advance the lookahead to the next token."""
    global cur_tok
    cur_tok = next(all_toks)


def match(kinds, val=''):
    """Consume the current token if its type is in `kinds`
    (and, when `val` is given, its value equals `val`)."""
    global cur_tok
    t, v = cur_tok.type, cur_tok.value
    if t in kinds and (not val or v == val):
        get_token()
    else:
        raise SyntaxError('unexpected token %s %r' % (t, v))


def expr():
    # expr -> term { (+|-) term }
    global cur_tok
    tmp = term()
    t, v = cur_tok.type, cur_tok.value
    while v == '+' or v == '-':
        match(tokens_op)
        rhs = term()
        e = str(tmp) + str(v) + str(rhs)
        tmp = eval(e)
        print(e, '=', tmp)
        t, v = cur_tok.type, cur_tok.value
    return tmp


def term():
    # term -> factor { (*|/) factor }
    global cur_tok
    tmp = factor()
    t, v = cur_tok.type, cur_tok.value
    while v == '*' or v == '/':
        match(tokens_op)
        rhs = factor()
        e = str(tmp) + str(v) + str(rhs)
        tmp = eval(e)
        print(e, '=', tmp)
        t, v = cur_tok.type, cur_tok.value
    return tmp


def factor():
    # factor -> NUMBER | ( expr )
    global cur_tok
    t, v = cur_tok.type, cur_tok.value
    if t in tokens_num:
        match(tokens_num)
        return v
    elif v == '(':
        match(tokens_op, '(')
        tmp = expr()
        match(tokens_op, ')')
        return tmp
    else:
        raise SyntaxError('unexpected token %s %r' % (t, v))


if __name__ == '__main__':
    text = input('calc >')
    all_toks = tlex(text)
    get_token()
    print('**********')
    res = expr()
    print('**********')
    print(text, '=', res)
    print('**********')

Brief notes

  • match function
    * Consumes the current token when its type is among the expected kinds (and, if a value such as '(' is given, the value matches too), then advances the lookahead; otherwise it raises a syntax error (an example follows this list).
  • expr function
    * Implements expr → term { (+ | -) term }: parses one term, then keeps folding in further terms while the lookahead is + or -.
  • term function
    * Implements term → factor { (* | /) factor }: the same shape as expr, one precedence level down for * and /.
  • factor function
    * Implements factor → NUMBER | ( expr ): returns a number directly, or recurses back into expr for a parenthesized group.
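For instance, a malformed input fails at the first token that fits no production; with the match/factor error handling above, a missing ')' ends roughly like this:

calc >(1+2
**********
1+2 = 3
Traceback (most recent call last):
  ...
SyntaxError: unexpected token END '#'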

Example run

  • $2*(1.5+3)$
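Run through the code above, this input produces roughly the following session (each intermediate line is printed by expr or term as a sub-expression is evaluated):

calc >2*(1.5+3)
**********
1.5+3 = 4.5
2*4.5 = 9.0
**********
2*(1.5+3) = 9.0
**********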

3. LR

The basic usage of ply is the same as above.

Reference code
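
The linked reference code is not reproduced here; as a rough, hypothetical sketch of what the yacc-based (LR) version looks like (reusing the token rules from chap0x07_Lex above, not the linked code), something along these lines works:

import ply.lex as lex
import ply.yacc as yacc
from chap0x07_Lex import *      # token list and t_* rules defined above
                                # (ply will warn that END is unused; the LR
                                #  version needs no '#' end marker)

# Operator precedence resolves the ambiguity of the flat expr rules.
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)


def p_expr_binop(p):
    '''expr : expr PLUS expr
            | expr MINUS expr
            | expr TIMES expr
            | expr DIVIDE expr'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    else:
        p[0] = p[1] / p[3]


def p_expr_group(p):
    'expr : LPAREN expr RPAREN'
    p[0] = p[2]


def p_expr_number(p):
    'expr : NUMBER'
    p[0] = p[1]


def p_error(p):
    print('Syntax error at', p)


if __name__ == '__main__':
    lexer = lex.lex()
    parser = yacc.yacc()                             # builds the LALR tables
    print(parser.parse('2*(1.5+3)', lexer=lexer))    # -> 9.0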

4. References

用LL(1)递归下降语法器构造一个计算器 (Building a calculator with an LL(1) recursive-descent parser) - 华子的代码空间 - 博客园 (cnblogs)
