re: richedit research – first results of copy & paste [not logged in] jacky_zz 2012-08-30 08:52
It does not run on Windows XP.
re: Professional because focused: EverEdit 2.0 released [not logged in] jacky_zz 2012-06-17 09:46
How about open-sourcing it?
re: Audio file spectrum [not logged in] jacky_zz 2012-03-09 18:02
@tfzxyinhao: spek is open source; it is written in Vala with GTK.
re: A quick try of ffmpeg [not logged in] jacky_zz 2011-12-26 15:34
@glueless lace wigs
No. If I had that kind of skill, I would not be playing around with this.
re: Development based on mplayer (PART III) [not logged in] jacky_zz 2011-12-13 22:56
It is a real-time spectrum analyzer.
re: Source-code addendum to "my own mp3 player" [not logged in] jacky_zz 2011-09-19 16:47
@Vincky
flac: http://flac.sourceforge.net
ape(Monkey's Audio): http://www.monkeysaudio.com/developers.html
re: A UI framework extracted from the Chrome open-source code, part 3 (.3) [not logged in] jacky_zz 2011-09-16 10:51
A suggestion: could you replace that wmv file under bin? It is far too large!
re: A UI framework extracted from the Chrome open-source code, part 3 (.3) [not logged in] jacky_zz 2011-09-15 08:45
That was fast!
re: A UI framework extracted from the Chrome open-source code, part 3 (.2.5) [not logged in] Jacky_zz 2011-09-02 23:44
Another new version is out; still following.
re: I lost my first love (original) [not logged in] jacky_zz 2011-08-25 16:51
First time I have seen this; I have had a similar experience.
re: A UI framework extracted from the Chrome open-source code, part 3 (.1.5) [not logged in] jacky_zz 2011-06-05 09:40
Still following.
re: A UI framework extracted from the Chrome open-source code, part 3 (.1) [not logged in] jacky_zz 2011-05-30 13:34
OK, looking forward to it.
re: A UI framework extracted from the Chrome open-source code, part 3 (.1) [not logged in] jacky_zz 2011-05-30 08:10
Has it been committed to the x-framework svn yet?
re: A UI framework extracted from the Chrome open-source code, part 2 (.x) jacky_zz 2011-04-25 14:00
Another new release?
re: How to make a C++ program run on another computer [not logged in] jacky_zz 2011-02-12 12:33
In the code-generation options pick the multi-threaded static runtime (/MT); then the program does not depend on the runtime DLLs.
re: Development based on mplayer jacky_zz 2011-02-11 11:04
spectrum_analyzer.dll is not open source.
re: Development based on mplayer jacky_zz 2011-01-18 17:55
to chris:
I downloaded the mplayer-ww source, but could not build the released mplayer-ww binary from it, so I had no choice but to write my own UI.
re: Development based on mplayer jacky_zz 2011-01-18 17:53
to gaimor:
My QQ is 59502553.
re: A simple audio player based on Ffmpeg jacky_zz 2010-04-14 10:34
What you use is not what matters; it is what you take away from it. Everyone has a different focus, chooses a different direction, and gets different results, but one thing is the same: No Pains, No Gains.
re: A simple audio player based on Ffmpeg jacky_zz 2010-04-14 08:28
TO 欣萌: Not at all. Others can comment however they like; I will keep walking my own path.
re: A simple audio player based on Ffmpeg jacky_zz 2010-04-13 14:30
I did not publish this program to draw attention; it is mostly a record of what I learned along the way, so that years from now I can look back at the road I traveled. I would rather explore quietly than spend time arguing, which helps nothing.
re: A simple audio player based on Ffmpeg jacky_zz 2010-04-12 16:23
Does it only count as real technical skill if you write the decoder yourself?
Then show one, and let us see this technical depth of yours.
re: My own mp3 player [with spectrum] [not logged in] jacky_zz 2009-12-24 09:13
Have you actually done any work in this area?
re: My own mp3 player [with spectrum] jacky_zz 2009-12-23 10:07
In my tests the value 4608 never causes an overrun; I looked through many open-source Winamp plug-ins, and this value appears very frequently.
PS: I considered the approach you mention as well, but the results did not seem very good: the displayed spectrum did not quite match what was actually playing. waveOut has fairly high latency, and DirectSound with notification positions is not the best answer either. If you could compute the actual playback position, that would be the most accurate. Do you have a better idea?
My QQ is 59502553; shall we discuss?
re: ffmpeg_play on ubuntu 9.10 jacky_zz 2009-12-14 22:26
Hello. These days I no longer want to write format-specific audio decoding code myself; I want to use ffmpeg as the decoding back end. This has several advantages. First, it shifts the focus of development from decoding to the program's architecture. Second, ffmpeg supports many formats, so the program can play many kinds of audio files; I have tested aac, ape, flac, mp3, mp4, mpc, ogg, wma and wav with decent results. Third, I can write a few DLLs wrapping ffmpeg and the DirectSound operations, which makes everything more modular. This is only a rough plan; once the wrappers are done, all that is left is the UI. Interested in building it together?
PS: Over the next few days I plan to build with VC6 and test on several systems, including Ubuntu (9.10, Wine v1.01).
My QQ is 59502553.
re: ffmpeg_play on ubuntu 9.10 jacky_zz 2009-12-14 15:09
There are examples of what you describe on codeproject: essentially you develop your own DirectShow filter and register it with the system, so that the system decodes input files through your filter and hands the decoded data back to the application for playback.
Example: http://www.codeproject.com/KB/audio-video/PeakMeterCS.aspx
re: A quick try of ffmpeg jacky_zz 2009-11-25 11:03
I found the cause of the aac and ogg playback failures. Inside ffmpeg, memory must be allocated with av_malloc and freed with av_free. Memory allocation behaves differently on Windows and Linux, and ffmpeg checks during decoding whether buffers are aligned (aligned memory speeds up the CPU), but buffers obtained on Windows through plain malloc or stack arrays are not guaranteed to be aligned. So with formats such as aac and ogg, whose frame sizes differ from those of other audio formats, the program can fail at run time. The corrected code follows; adapt it to your own program.
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")
#ifdef __cplusplus
extern "C" {
#endif
#include "./include/avcodec.h"
#include "./include/avformat.h"
#include "./include/avutil.h"
#include "./include/mem.h"
#ifdef __cplusplus
}
#endif
#define BLOCK_SIZE 4608
#define BLOCK_COUNT 20
HWAVEOUT hWaveOut = NULL;
static void CALLBACK waveOutProc(HWAVEOUT, UINT, DWORD, DWORD, DWORD);
static WAVEHDR* allocateBlocks(int size, int count);
static void freeBlocks(WAVEHDR* blockArray);
static void writeAudio(HWAVEOUT hWaveOut, LPSTR data, int size);
static CRITICAL_SECTION waveCriticalSection;
static WAVEHDR* waveBlocks;
static volatile unsigned int waveFreeBlockCount;
static int waveCurrentBlock;
typedef struct AudioState {
AVFormatContext* pFmtCtx;
AVCodecContext* pCodecCtx;
AVCodec* pCodec;
//uint8_t* audio_buf1[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
uint8_t* audio_buf1;
uint8_t* audio_buf;
unsigned int audio_buf_size; /* in bytes */
unsigned int buffer_size;
int audio_buf_index; /* in bytes */
AVPacket audio_pkt_temp;
AVPacket audio_pkt;
uint8_t* audio_pkt_data;
int audio_pkt_size;
int stream_index;
} AudioState;
int audio_decode_frame(AudioState* pState) {
AVPacket* pkt_temp = &pState->audio_pkt_temp;
AVPacket* pkt = &pState->audio_pkt;
AVCodecContext* dec= pState->pCodecCtx;
int len = 0, data_size = 0;
int err = 0;
for( ; ; ) {
while (pkt_temp->size > 0) {
// data_size = sizeof(pState->audio_buf1);
data_size = pState->buffer_size;
len = avcodec_decode_audio3(dec, (int16_t*)pState->audio_buf1, &data_size, pkt_temp);
if (len < 0) {
pkt_temp->size = 0;
break;
}
pkt_temp->data += len;
pkt_temp->size -= len;
if (data_size <= 0)
continue;
pState->audio_buf = pState->audio_buf1;
return data_size;
}
if (pkt->data)
av_free_packet(pkt);
if((err = av_read_frame(pState->pFmtCtx, pkt)) < 0)
return -1;
pkt_temp->data = pkt->data;
pkt_temp->size = pkt->size;
}
return -1;
}
int main(int argc, char* argv[]) {
int err = 0;
AudioState audio_state = {0};
unsigned int i = 0;
unsigned int ready = 0;
OPENFILENAME ofn = {0};
char filename[MAX_PATH];
WAVEFORMATEX wfx = {0};
uint8_t buffer[BLOCK_SIZE];
uint8_t* pbuffer = buffer;
AVInputFormat* iformat = NULL;
int audio_size = 0, data_size = 0;
int len = 0, len1 = 0, eof = 0, size = 0;
unsigned int buffer_size = (AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2; /* decode buffer size */
memset(&ofn, 0, sizeof(OPENFILENAME));
ofn.lStructSize = sizeof(ofn);
ofn.hwndOwner = GetDesktopWindow();
ofn.lpstrFile = filename;
ofn.lpstrFile[0] = '\0';
ofn.nMaxFile = sizeof(filename) / sizeof(filename[0]);
ofn.lpstrFilter = TEXT("All support files\0*.aac;*.ape;*.flac;*.mp3;*.mp4;*.mpc;*.ogg;*.tta;*.wma;*.wav\0");
ofn.nFilterIndex = 1;
ofn.lpstrFileTitle = NULL;
ofn.nMaxFileTitle = 0;
ofn.lpstrInitialDir = NULL;
ofn.Flags = OFN_PATHMUSTEXIST | OFN_FILEMUSTEXIST;
if (GetOpenFileName(&ofn) == FALSE)
return 0;
av_register_all();
err = av_open_input_file(&audio_state.pFmtCtx, filename, NULL, 0, NULL);
if(err < 0) {
printf("can not open file %s.\n", filename);
return -1;
}
err = av_find_stream_info(audio_state.pFmtCtx);
if(err < 0) {
printf("can not find stream info of file %s.\n", filename);
return -1;
}
for(i = 0; i < audio_state.pFmtCtx->nb_streams; i++) {
if(audio_state.pFmtCtx->streams[i]->codec->codec_type == CODEC_TYPE_AUDIO) {
audio_state.pCodecCtx = audio_state.pFmtCtx->streams[i]->codec;
audio_state.stream_index = i;
ready = 1;
break;
}
}
if(!ready)
return -1;
audio_state.pCodec = avcodec_find_decoder(audio_state.pCodecCtx->codec_id);
if(!audio_state.pCodec || avcodec_open(audio_state.pCodecCtx, audio_state.pCodec) < 0)
return -1;
wfx.nSamplesPerSec = audio_state.pCodecCtx->sample_rate;
switch(audio_state.pCodecCtx->sample_fmt)
{
case SAMPLE_FMT_U8:
wfx.wBitsPerSample = 8;
break;
case SAMPLE_FMT_S16:
wfx.wBitsPerSample = 16;
break;
case SAMPLE_FMT_S32:
wfx.wBitsPerSample = 32;
break;
case SAMPLE_FMT_FLT:
wfx.wBitsPerSample = sizeof(float) * 8; /* float samples are 32 bits */
break;
default:
wfx.wBitsPerSample = 0;
break;
}
wfx.nChannels = FFMIN(2, audio_state.pCodecCtx->channels);
wfx.cbSize = 0;
wfx.wFormatTag = WAVE_FORMAT_PCM;
wfx.nBlockAlign = (wfx.wBitsPerSample * wfx.nChannels) >> 3;
wfx.nAvgBytesPerSec = wfx.nBlockAlign * wfx.nSamplesPerSec;
waveBlocks = allocateBlocks(BLOCK_SIZE, BLOCK_COUNT);
waveFreeBlockCount = BLOCK_COUNT;
waveCurrentBlock = 0;
InitializeCriticalSection(&waveCriticalSection);
// open wave out device
if(waveOutOpen(&hWaveOut, WAVE_MAPPER, &wfx, (DWORD_PTR)waveOutProc,
(DWORD_PTR)&waveFreeBlockCount, CALLBACK_FUNCTION) != MMSYSERR_NOERROR) {
fprintf(stderr, "%s: unable to open wave mapper device\n", argv[0]);
ExitProcess(1);
}
// allocate the decode buffer with av_malloc so it is properly aligned
audio_state.audio_buf1 = (uint8_t*)av_malloc(buffer_size);
audio_state.buffer_size = buffer_size;
// play loop
for( ; ; ) {
len = BLOCK_SIZE;
size = 0;
pbuffer = buffer;
if(eof)
break;
while(len > 0) {
if(audio_state.audio_buf_index >= (int)audio_state.audio_buf_size) {
audio_size = audio_decode_frame(&audio_state);
if(audio_size < 0) {
if(size > 0)
break;
eof = 1;
break;
}
audio_state.audio_buf_size = audio_size;
audio_state.audio_buf_index = 0;
}
len1 = audio_state.audio_buf_size - audio_state.audio_buf_index;
if(len1 > len)
len1 = len;
memcpy(pbuffer, (uint8_t *)audio_state.audio_buf + audio_state.audio_buf_index, len1);
len -= len1;
pbuffer += len1;
size += len1;
audio_state.audio_buf_index += len1;
}
writeAudio(hWaveOut, (char*)buffer, size);
}
// free allocated memory
av_free(audio_state.audio_buf1);
audio_state.audio_buf1 = NULL;
// wait for complete
for( ; ; ) {
if(waveFreeBlockCount >= BLOCK_COUNT)
break;
Sleep(10);
}
for(i = 0; i < waveFreeBlockCount; i++)
if(waveBlocks[i].dwFlags & WHDR_PREPARED)
waveOutUnprepareHeader(hWaveOut, &waveBlocks[i], sizeof(WAVEHDR));
DeleteCriticalSection(&waveCriticalSection);
freeBlocks(waveBlocks);
waveOutClose(hWaveOut);
avcodec_close(audio_state.pCodecCtx);
system("pause");
return 0;
}
static void writeAudio(HWAVEOUT hWaveOut, LPSTR data, int size)
{
WAVEHDR* current;
int remain;
current = &waveBlocks[waveCurrentBlock];
while(size > 0) {
/*
* first make sure the header we're going to use is unprepared
*/
if(current->dwFlags & WHDR_PREPARED)
waveOutUnprepareHeader(hWaveOut, current, sizeof(WAVEHDR));
if(size < (int)(BLOCK_SIZE - current->dwUser)) {
memcpy(current->lpData + current->dwUser, data, size);
current->dwUser += size;
break;
}
remain = BLOCK_SIZE - current->dwUser;
memcpy(current->lpData + current->dwUser, data, remain);
size -= remain;
data += remain;
current->dwBufferLength = BLOCK_SIZE;
waveOutPrepareHeader(hWaveOut, current, sizeof(WAVEHDR));
waveOutWrite(hWaveOut, current, sizeof(WAVEHDR));
EnterCriticalSection(&waveCriticalSection);
waveFreeBlockCount--;
LeaveCriticalSection(&waveCriticalSection);
/*
* wait for a block to become free
*/
while(!waveFreeBlockCount)
Sleep(10);
/*
* point to the next block
*/
waveCurrentBlock++;
waveCurrentBlock %= BLOCK_COUNT;
current = &waveBlocks[waveCurrentBlock];
current->dwUser = 0;
}
}
static WAVEHDR* allocateBlocks(int size, int count)
{
char* buffer;
int i;
WAVEHDR* blocks;
DWORD totalBufferSize = (size + sizeof(WAVEHDR)) * count;
/*
* allocate memory for the entire set in one go
*/
if((buffer = (char*)HeapAlloc(
GetProcessHeap(),
HEAP_ZERO_MEMORY,
totalBufferSize
)) == NULL) {
fprintf(stderr, "Memory allocation error\n");
ExitProcess(1);
}
/*
* and set up the pointers to each bit
*/
blocks = (WAVEHDR*)buffer;
buffer += sizeof(WAVEHDR) * count;
for(i = 0; i < count; i++) {
blocks[i].dwBufferLength = size;
blocks[i].lpData = buffer;
buffer += size;
}
return blocks;
}
static void freeBlocks(WAVEHDR* blockArray)
{
/*
* and this is why allocateBlocks works the way it does
*/
HeapFree(GetProcessHeap(), 0, blockArray);
}
static void CALLBACK waveOutProc(
HWAVEOUT hWaveOut,
UINT uMsg,
DWORD dwInstance,
DWORD dwParam1,
DWORD dwParam2
)
{
int* freeBlockCounter = (int*)dwInstance;
/*
* ignore calls that occur due to opening and closing the
* device.
*/
if(uMsg != WOM_DONE)
return;
EnterCriticalSection(&waveCriticalSection);
(*freeBlockCounter)++;
LeaveCriticalSection(&waveCriticalSection);
}
re: A quick try of ffmpeg jacky_zz 2009-11-24 21:02
What does that have to do with my program? Besides, mine ships with its source code, simple as it is.
re: My own mp3 player [with spectrum] jacky_zz 2009-11-09 10:21
Flow:
(1) read PCM data from the file;
(2) write the PCM data to the playback device (waveOut or DirectSound);
(3) pass the same PCM data to the DSP stage (FFT, drawing).
Notes: in step 1, do not read too much data at once; the amount directly determines the latency of the next two steps, and the larger the latency, the less "real-time" the display. The figure I found online is 4608 bytes. Step 2 is standard, nothing special. Step 3 consists of the FFT over the PCM data and drawing the spectrum.
re: Source-code addendum to "my own mp3 player" jacky_zz 2009-10-10 15:03
Sigh... yes. I have been busy lately, so I can only rebuild it gradually from memory.
re: My own mp3 player [with spectrum] jacky_zz 2009-09-18 17:15
TO ALL:
My hard disk's partition table was corrupted recently and all data was lost (a heavy loss!), including the AudioPlayer source. The only surviving source can be downloaded from www.codeproject.com. Thank you all for your continued interest in this program; I had planned to publish the code, but the disk failure now makes that impossible. My sincere apologies.
jacky_zz
2009-09-18
re: My own mp3 player [with spectrum] jacky_zz 2009-08-12 11:04
TO lyon:
Well, only patient experimentation will get you to the final result.
PS: You can reach me on QQ: 59502553.
re: My own mp3 player [with spectrum] jacky_zz 2009-08-10 14:49
The DSound buffer size is unrelated to the read/write buffer size: the larger the read/write buffer, the longer decoding takes, and the smaller it is, the shorter. The DSound buffer is usually sized for about two seconds of data. For the spectrum display, my example takes 512 bytes at a time from a ring buffer (which I size to hold one second of data), runs an FFT, then analyzes and draws the first 256 values (half of the 512).
re: My own mp3 player [with spectrum] jacky_zz 2009-08-03 08:34
TO lyon:
The principle behind fetching the data is covered in the article. I found a piece via Google written by one of the original Winamp authors. For real-time spectrum analysis you first need an FFT, and the FFT's cost grows with the length of the input: the more data you pass in, the more computation and CPU time it takes. To keep the FFT cheap you want to pass in less data, but when outputting through waveOutXXX or DirectSound, too little PCM data causes audible dropouts. After repeated testing the author settled on a suitable value: 4608. So each time you fetch 4608 bytes of PCM, send them to waveOutXXX or DirectSound first, then pass the same PCM to the spectrum-analysis thread via thread synchronization; that thread does the FFT and the drawing.
re: My own mp3 player [with spectrum] jacky_zz 2009-08-03 08:27
To lyon:
Hello. My current implementation does achieve real-time behavior, but it still has issues: on a single-CPU machine the playback and spectrum threads use a fair amount of CPU, between 15% and 30%; on my dual-CPU machine it stays between 0% and 3% when QQ2009 is not running, but as soon as QQ2009 starts it jumps to between 10% and 25%.
Here is the high-resolution timing code I use:
===========System.h===========
#pragma once
#ifndef INCLUDE_SYSTEM
#define INCLUDE_SYSTEM
#include <windows.h>
typedef __int64 jlong;
typedef unsigned int juint;
typedef unsigned __int64 julong;
typedef long jint;
typedef signed char jbyte;
#define CONST64(x) (x ## LL)
#define NANOS_PER_SEC CONST64(1000000000)
#define NANOS_PER_MILLISEC 1000000
jlong as_long(LARGE_INTEGER x);
void set_high(jlong* value, jint high);
void set_low(jlong* value, jint low);
class System
{
private:
static jlong frequency;
static int ready;
static void init()
{
LARGE_INTEGER liFrequency = {0};
QueryPerformanceFrequency(&liFrequency);
frequency = as_long(liFrequency);
ready = 1;
}
public:
static jlong nanoTime()
{
if(ready != 1)
init();
LARGE_INTEGER liCounter = {0};
QueryPerformanceCounter(&liCounter);
double current = as_long(liCounter);
double freq = frequency;
return (jlong)((current / freq) * NANOS_PER_SEC);
}
};
#endif
===========System.cpp===========
#include "System.h"
void set_low(jlong* value, jint low)
{
*value &= (jlong)0xffffffff << 32;
*value |= (jlong)(julong)(juint)low;
}
void set_high(jlong* value, jint high)
{
*value &= (jlong)(julong)(juint)0xffffffff;
*value |= (jlong)high << 32;
}
jlong as_long(LARGE_INTEGER x) {
jlong result = 0; // initialization to avoid warning
set_high(&result, x.HighPart);
set_low(&result, x.LowPart);
return result;
}
LARGE_INTEGER liFrequency = {0};
BOOL gSupportPerformanceFrequency = QueryPerformanceFrequency(&liFrequency);
jlong System::frequency = as_long(liFrequency);
int System::ready = 1;
re: union revisited jacky_zz 2009-07-13 11:30
The & can be omitted; perhaps VC6 requires it.
re: My own mp3 player [with spectrum] jacky_zz 2009-07-06 16:50
Use an event to control it.
re: My own mp3 player [with spectrum] jacky_zz 2009-05-21 09:34
This version does support wma decoding, just through a COM interface. I have not tried DMO, but the flow should be about the same: obtain PCM data, then play it.
re: My own mp3 player [with spectrum] jacky_zz 2009-04-30 10:42
libmad is not installed, or its location cannot be found.
re: My own mp3 player [with spectrum] jacky_zz 2009-02-09 08:52
TO audioer, QQ: 59502553
Building it in VS2008 requires Windows Media Format 9 or later.
PS: This program borrows heavily from YoYoPlayer (written in Java); if you are interested, see:
http://www.blogjava.net/hadeslee/archive/2008/07/29/218161.html
re: My own mp3 player [with spectrum] [not logged in] jacky_zz 2009-02-04 13:46
Oh, really? You have unique insight into spectrum processing? Shall we compare notes sometime?
My QQ: 59502553
re: A Gantt-chart control I wrote [not logged in] jacky_zz 2008-08-25 12:08
Wow, have not seen you around for ages. Nice work, you have my support.
re: sourceforge is blocked again... what do you want me to say... [not logged in] jacky_zz 2008-07-08 14:21
You can reach it with the Firefox 3 extension Gladder.
re: Back to Wuhan [not logged in] jacky_zz 2008-03-03 17:42
Wishing you a smooth thesis defense, and good luck!
re: Go On [not logged in] jacky_zz 2007-08-15 16:45
Aww, gorgeous!
re: Untitled jacky_zz 2007-07-27 17:42
Is there no download?
re: I lost my first love (original) jacky_zz 2007-05-15 14:43
In a programmer's eyes, code is his best girlfriend, maybe even his wife. I do not know whether you see it that way, but that is the conclusion I drew after going through something like your story twice. I too switched into computing from another field and went through the same kind of hard self-study, except I started not with C++ but with Basic; it took many attempts before I formally began learning C++, and before that I mostly worked with Java. First love is beautiful, but in the end it becomes a memory: the past is reference material for the life ahead, not the future itself.