

Exploring WebSocket: its capabilities with voice and images

2015/12/26 · JavaScript · 3 comments · websocket

Original article from: AlloyTeam

When it comes to WebSocket, most of you are probably no strangers to it; if you are, that's fine too, it can be summed up in one sentence:

"The WebSocket protocol is a new protocol in HTML5. It enables full-duplex communication between browser and server."

Compared with the traditional server-push techniques, WebSocket is simply a huge improvement; we can wave goodbye to comet and long polling, and count ourselves lucky to live in the era of HTML5~

This article will look at WebSocket in three parts:

first, common ways to use WebSocket; second, building a server-side WebSocket entirely ourselves; and finally, the key part, two demos built with WebSocket, image transfer and an online voice chat room. Let's go

I. Common uses of WebSocket

Here are a few WebSocket implementations I consider common…… (note: this article assumes a Node.js environment)

1. socket.io

First, a demo

JavaScript

var http = require('http');
var io = require('socket.io');
 
var server = http.createServer(function(req, res) {
    res.writeHeader(200, {'content-type': 'text/html;charset="utf-8"'});
    res.end();
}).listen(8888);
 
var socket = io.listen(server);
 
socket.sockets.on('connection', function(socket) {
    // push an event to this client; 'xxx' and the options object are placeholders
    socket.emit('xxx', { /* options */ });
 
    socket.on('xxx', function(data) {
        // do something with the data sent by the client
    });
});

Anyone who knows WebSocket almost certainly knows socket.io, because socket.io is famous and genuinely good: it handles timeouts, the handshake and so on for you. I'd guess it is also the most common way WebSocket gets used. Its very best feature is graceful degradation: when a browser doesn't support WebSocket, it quietly falls back internally to long polling and the like, and neither users nor developers need to care about the details. Very convenient.
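For reference, the browser side of this demo might look roughly like the following. This is a sketch, not part of the original article: it assumes the page loads the client script that socket.io serves at /socket.io/socket.io.js, and that the 'xxx' event names match whatever the server actually emits and listens for.

JavaScript

// runs in the browser, with the socket.io client script already loaded
var socket = io.connect('http://localhost:8888');

socket.on('xxx', function (data) {
    // react to events pushed by the server
});

socket.emit('xxx', { /* options */ });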

But everything has two sides. Precisely because socket.io is so complete, it also brings some baggage. The main one is bloat: its wrapping adds quite a bit of communication overhead, and the graceful-degradation selling point has gradually lost its shine as browser support has become standard:

Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+

This isn't to accuse socket.io of being bad or obsolete, just to say that sometimes we can also consider other implementations~

 

2. The http module

Having just called socket.io bloated, let's now look at something lightweight. First, the demo:

JavaScript

var http = require('http');
var server = http.createServer();
server.on('upgrade', function(req) {
    console.log(req.headers);
});
server.listen(8888);

A very simple implementation. In fact this is how socket.io handles WebSocket internally as well; it just wraps some handler processing on top for us, and we can add our own handling here too (the relevant code can be found in the socket.io source).


 

3. The ws module

An example later on uses it, so I'll just mention it here and go into the specifics further down~

 

II. Implementing a server-side WebSocket ourselves

正巧说了几种常见的websocket完毕情势,未来我们驰念,对于开垦者来讲

compared with the traditional HTTP request/response model, WebSocket adds server-pushed events, and the client handles the events it receives; development-wise the difference really isn't that big, is it?

That is because those modules have already filled in all the pitfalls of data-frame parsing for us. In this second part we will try to build a simple server-side WebSocket module ourselves.

Thanks to 次碳酸钴 for the research and support. I'll only cover this part briefly here; if you're curious about it, search for 【web技术研究所】.

Implementing a server-side WebSocket yourself comes down to two things: using the net module to receive the data stream, and parsing the data against the official frame-structure diagram. Once those two parts are done, all the low-level groundwork is finished.

First, a packet capture of the WebSocket handshake request sent by a client.

The client code is very simple:

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

[screenshot: packet capture of the client's WebSocket handshake request]
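The screenshot isn't reproduced here, but a client handshake request generally looks like the following; the header values are illustrative, and the Sec-WebSocket-Key is a random base64 nonce generated by the browser (this one is the example key from RFC 6455):

GET / HTTP/1.1
Host: 127.0.0.1:8888
Connection: Upgrade
Upgrade: websocket
Origin: http://127.0.0.1
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==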

The server has to validate that key: append a specific magic string to it, run SHA-1 over the result once, and send it back base64-encoded.

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';
 
require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // grab the key sent by the client
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            // append the magic string WS, SHA-1 it once, then base64-encode the result
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // write the response back to the client; all of these fields are required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // this field carries the key processed by the server
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // an empty line ends the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);
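As a quick sanity check of the accept-key computation, the example key from RFC 6455 should produce its well-known accept value:

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

// the example Sec-WebSocket-Key from RFC 6455
var accept = crypto.createHash('sha1')
    .update('dGhlIHNhbXBsZSBub25jZQ==' + WS)
    .digest('base64');

console.log(accept); // s3pPLMBiTxaQ9kYGzzhZRbK+xOo=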

如此握手部分就早就到位了,前面就是数据帧深入分析与变化的活了

First, look at the official frame-structure diagram:

[diagram: WebSocket data frame structure, as given in the spec (RFC 6455)]

A brief walkthrough:

FIN is the flag that marks whether this is the final fragment of a message

RSV1 to RSV3 are reserved bits, normally 0

opcode describes the data type: whether the frame is a continuation fragment, text or binary, a control frame such as a heartbeat ping/pong, and so on

The opcode values are: 0x0 continuation frame, 0x1 text frame, 0x2 binary frame, 0x8 connection close, 0x9 ping, 0xA pong; the remaining values are reserved.

MASK indicates whether the payload is masked

Payload len, together with the extended payload length that follows, describes the data length; this is the trickiest part

Payload len is only 7 bits, which as an unsigned integer only covers 0 to 127; such a small value obviously can't describe large payloads. So the rule is: if the length is 125 or less, Payload len itself is the data length; if Payload len is 126, the following two bytes hold the length; if it is 127, the following eight bytes hold the length. For example, a 300-byte message is sent with Payload len = 126 followed by the 16-bit extended length 0x012C.

Masking-key is the 4-byte mask applied to the payload when MASK is set

Below is the code for parsing a data frame:

JavaScript

function decodeDataFrame(e) {
    var i = 0, j, s,
        frame = {
            FIN: e[i] >> 7,
            Opcode: e[i++] & 15,
            Mask: e[i] >> 7,
            PayloadLength: e[i++] & 0x7F
        };
 
    // 126: the real length lives in the next 2 bytes
    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }
 
    // 127: the real length lives in the next 8 bytes
    // (the top 4 bytes are skipped here, which is fine for payloads under 4GB)
    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }
 
    if(frame.Mask) {
        // client-to-server frames are masked: XOR each byte with the 4-byte masking key
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];
 
        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i + j] ^ frame.MaskingKey[j % 4]);
        }
    } else {
        s = e.slice(i, i + frame.PayloadLength);
    }
 
    s = new Buffer(s);
 
    // opcode 1 means a text frame, so decode the payload to a string
    if(frame.Opcode === 1) {
        s = s.toString();
    }
 
    frame.PayloadData = s;
    return frame;
}
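A quick way to exercise the parser is to feed it the masked single-frame text message "Hello" given as an example in RFC 6455 (a sketch using Node's Buffer):

JavaScript

// 0x81: FIN + text opcode; 0x85: MASK bit + length 5; then the 4-byte mask and the masked payload
var frame = decodeDataFrame(new Buffer([
    0x81, 0x85, 0x37, 0xfa, 0x21, 0x3d,
    0x7f, 0x9f, 0x4d, 0x51, 0x58
]));

console.log(frame.FIN, frame.Opcode, frame.PayloadData); // 1 1 'Hello'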

Next comes generating a data frame:

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;
 
    // first byte: FIN flag plus opcode
    s.push((e.FIN << 7) + e.Opcode);
 
    // second byte (and possibly extended bytes): the payload length
    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l & 0xFF00) >> 8, l & 0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l & 0xFF000000) >> 24, (l & 0xFF0000) >> 16, (l & 0xFF00) >> 8, l & 0xFF);
    }
 
    return Buffer.concat([new Buffer(s), o]);
}
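Note that encodeDataFrame never sets the MASK bit; per the spec, frames sent from server to client are not masked. That also means the two helpers can be checked against each other with a simple round trip:

JavaScript

var buf = encodeDataFrame({ FIN: 1, Opcode: 1, PayloadData: 'Hello' });
console.log(decodeDataFrame(buf).PayloadData); // 'Hello'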

都以比照帧结构暗指图上的去管理,在此不细讲,文章主要在下部分,假设对那块感兴趣的话能够活动web本领钻探所~

 

III. Transferring images and a voice chat room over WebSocket

Now for the main event. The point of this article is really to show a couple of WebSocket use cases.

1. Transferring images

Let's first think through the steps for transferring an image: the server receives the client's request, reads the image file, and forwards the binary data to the client. How does the client handle it? With a FileReader object, of course.

Client code first:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("handshake succeeded");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

On receiving a message we call readAsDataURL and add the image straight to the page as a base64 data URL. (This works because the browser's WebSocket delivers binary frames as a Blob by default, which is exactly what FileReader expects.)

Now to the server-side code:

JavaScript

// 'o' is the net socket obtained from the handshake code above
fs.readdir("skyland", function(err, files) {
    if(err) {
        throw err;
    }
    for(var i = 0; i < files.length; i++) {
        fs.readFile('skyland/' + files[i], function(err, data) {
            if(err) {
                throw err;
            }
 
            // wrap the raw image bytes in a binary data frame and send them
            o.write(encodeImgFrame(data));
        });
    }
});
 
function encodeImgFrame(buf) {
    var s = [],
        l = buf.length;
 
    // FIN = 1, opcode = 2 (binary frame)
    s.push((1 << 7) + 2);
 
    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l & 0xFF00) >> 8, l & 0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l & 0xFF000000) >> 24, (l & 0xFF0000) >> 16, (l & 0xFF00) >> 8, l & 0xFF);
    }
 
    return Buffer.concat([new Buffer(s), buf]);
}

Note the line s.push((1 << 7) + 2): the opcode is simply hard-coded to 2 here, i.e. a binary frame, so the client will not try to toString the received data; otherwise it would throw an error~

The code is quite simple. Here I'd like to share how fast image transfer over WebSocket actually is.

I tested with quite a few images, 8.24 MB in total.

A normal static-asset server takes about 20s (the server is far away).

A CDN takes about 2.8s.

So what about our WebSocket approach??!

The answer: also about 20s. Disappointed? The slowness is in the transmission itself, not in the server reading the images; with the same images on my local machine it finishes in about 1s. So a raw data stream can't break through the limits of distance to speed up transfer either.

Now let's look at another use of WebSocket~

 

2. Building a voice chat room with WebSocket

First, let's sort out what the voice chat room should do:

客户步入频道随后从迈克风输入音频,然后发送给后台转载给频道里面的别的人,别的人接收到音信进行播放

The difficulty looks like it lies in two places: first, the audio input; second, playing back the received data stream.

Audio input first. This uses HTML5's getUserMedia method, but be warned: this method has a pitfall when you go live, which I'll mention at the end. Code first:

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

The first argument is {audio: true}, enabling audio only; then an SRecorder object is created, and essentially all later operations happen on that object. At this point, if the code runs locally, the browser should ask whether to allow microphone input, and once you confirm, everything is up and running.
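As an aside, navigator.getUserMedia has since been deprecated in favour of the promise-based navigator.mediaDevices.getUserMedia, so in current browsers the same snippet would look roughly like this (a sketch that keeps the article's recorder variable):

JavaScript

navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function (stream) {
        recorder = new SRecorder(stream);
    })
    .catch(function (err) {
        console.log('getUserMedia failed: ' + err.name);
    });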

Next let's look at what the SRecorder constructor is; here is the important part:

JavaScript

var SRecorder = function(stream) {
    ……
   var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done audio filtering will recognise the idea: "before a piece of audio reaches the speakers for playback, we intercept it along the way, and that gives us the audio data; this interception is done by window.AudioContext, and all our audio operations are based on this object". Through an AudioContext we can create various AudioNode nodes, add filters, and play unusual sounds.

Recording works on the same principle: we still go through AudioContext, but with an extra step for accepting the microphone's audio input. Instead of fetching an audio ArrayBuffer with ajax and decoding it, as we would when processing existing audio, receiving the microphone requires the createMediaStreamSource method; note that its argument is exactly the stream handed to the success callback of getUserMedia.

As for the createScriptProcessor method, its official description is:

Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

——————

In short: this method uses JavaScript to handle the audio-capture work.

So we've reached audio capture! Victory is in sight!

Next, let's connect the microphone input and the audio capture node:

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official explanation of context.destination is as follows:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

That is, context.destination returns the final destination of all audio in the context.

Good. At this point we still need an event that listens for captured audio:

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method because it's needed later. The real gem is its encodeWAV method: the author applies several rounds of audio compression and optimisation. The whole object is included with the complete code at the end.

At this point the whole "user joins a channel and speaks into the microphone" part is done. Next comes sending the audio stream to the server, and this is where it gets slightly painful. As mentioned earlier, WebSocket uses the opcode to distinguish text from binary data, but what we push into audioData in onaudioprocess is an array, and playback at the other end needs a Blob of {type: 'audio/wav'}. So before sending we have to convert the array into a WAV Blob, which is exactly what the encodeWAV method above is for.

The server side seems simple: it only needs to relay.

Local testing did work, but then came the giant pit: when the program runs on a server, calling getUserMedia complains that it must be in a secure context, i.e. HTTPS is required, which in turn means ws has to become wss. Because of that, the server code below doesn't use our hand-rolled handshake, parsing and encoding; it looks like this:

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });
 
    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return ;
        }
 
        res.end(data);
    });
});
 
var wss = new ws.Server({server: server});
 
wss.on('connection', function(o) {
    o.on('message', function(message) {
        if(message.indexOf('user') === 0) {
            // the first message is "user:<name>": register this connection in the channel
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            // everything else is audio data: relay it to everyone in the channel
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});
 
server.listen(8888);

The code is still quite simple: it uses the https module plus the ws module mentioned at the beginning; userMap is the mock channel, implementing only the basic relay functionality.
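If per-user bookkeeping isn't needed, the relay could also lean on the client list that the ws server object keeps itself. A sketch, with the caveat that wss.clients is an array in the ws releases of that era and a Set in newer ones, and that in practice you would check each client's readyState before sending:

JavaScript

wss.on('connection', function(o) {
    o.on('message', function(message) {
        // relay every message to every connected client, including the sender
        wss.clients.forEach(function(client) {
            client.send(message);
        });
    });
});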

I chose the ws module because pairing it with https to get wss is extremely convenient, with zero conflict with the logic code.

I won't cover setting up HTTPS here; mainly you need a private key, a CSR/certificate signature and the certificate files. Interested readers can look it up (without it you can't use getUserMedia in a real environment anyway……).

Below is the complete front-end code:

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');
 
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
 
var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;
 
b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a user name');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device does not support voice chat');
        return false;
    }
 
    SRecorder.get(function (rec) {
        gRecorder = rec;
    });
 
    ws = new WebSocket("wss://x.x.x.x:8888");
 
    ws.onopen = function() {
        console.log('handshake succeeded');
        ws.send('user:' + a.value);
    };
 
    ws.onmessage = function(e) {
        receive(e.data);
    };
 
    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };
 
    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}
 
c.onclick = function() {
    if(ws) {
        ws.close();
    }
}
 
var SRecorder = function(stream) {
    config = {};
 
    config.sampleBits = config.sampleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);
 
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
 
    var audioData = {
        size: 0          // length of the recording, in samples
        , buffer: []     // buffered chunks of recorded samples
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample size: 8 or 16 bits
        , outputSampleRate: config.sampleRate    // output sample rate
        , oututSampleBits: config.sampleBits       // output sample size: 8 or 16 bits
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and downsample
            // merge the buffered chunks into one Float32Array
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.oututSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);
 
            var channelCount = 1; // mono
            var offset = 0;
 
            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };
 
            // RIFF chunk identifier (resource interchange file format)
            writeString('RIFF'); offset += 4;
            // total bytes from the next address to the end of the file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAVE format marker
            writeString('WAVE'); offset += 4;
            // "fmt " sub-chunk marker
            writeString('fmt '); offset += 4;
            // size of the fmt sub-chunk, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format (1 = PCM samples)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second, i.e. the playback speed of each channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels × sample rate × bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame: channels × bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data chunk identifier
            writeString('data'); offset += 4;
            // total size of the sample data, i.e. file size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }
 
            return new Blob([data], { type: 'audio/wav' });
        }
    };
 
    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }
 
    this.stop = function () {
        recorder.disconnect();
    }
 
    this.getBlob = function () {
        return audioData.encodeWAV();
    }
 
    this.clear = function() {
        audioData.clear();
    }
 
    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};
 
SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}
 
function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold down the A key to talk, release A to send.

I did try hands-free real-time talking by sending with setInterval, but the background noise was quite heavy and the result was poor; that would need another layer of wrapping around encodeWAV with more ambient-noise removal, so I went with the more convenient push-to-talk approach.

 

This article first surveyed how WebSocket is used, and then, following the spec, we tried parsing and generating data frames ourselves, which gives a deeper understanding of WebSocket.

Finally, the two demos showed some of WebSocket's potential. The voice chat room demo touches on quite a lot; if you haven't worked with the AudioContext object before, it's best to get familiar with AudioContext first.

That's all for this piece~ If you have any thoughts or questions, feel free to raise them so we can discuss and explore together~

 
