Search Results for '유용한 정보' (useful information)


44 POSTS

  1. 2011/07/22 Setting up a PHP development environment with Eclipse and utilities by 프로그래머
  2. 2010/08/03 The applications piled onto my development machine, an Apple MacBook Pro by 프로그래머 (3)
  3. 2010/02/15 Free open source solutions that help with building a homepage by 프로그래머
  4. 2009/11/21 I'll put the slimmer, more stylish Windows 7 to good use by 프로그래머 (4)
  5. 2009/03/29 Let's turn long, hard-to-type URLs into short addresses by 프로그래머 (3)
  6. 2009/03/14 Korean and international Q&A services where you can ask anything by 프로그래머
  7. 2009/03/07 Job listing search, employment search, and job centers run by the portals by 프로그래머 (3)
  8. 2009/02/07 Playing games the easy way by editing game data with Cheat Engine by 프로그래머 (6)
  9. 2009/01/31 Real-time talk in mini clubs with NateOn 4.0 Beta by 프로그래머 (1)
  10. 2009/01/24 The Daum Maps overhaul that made browsing maps a pleasure by 프로그래머
  11. 2008/12/21 HoverIP, a simple freeware network IP management tool by 프로그래머
  12. 2008/12/15 Reference manual for Sphinx, the open source search engine by 프로그래머
  13. 2008/11/30 GIMP, a free image editing tool that can stand in for Photoshop by 프로그래머 (7)
  14. 2008/11/26 Free software that can download an entire website by 프로그래머 (30)
  15. 2008/11/25 A utility that picks up colors shown on your monitor screen by 프로그래머 (5)
  16. 2008/11/23 Open source projects at the Naver Developer Center, which opened yesterday by 프로그래머 (4)
  17. 2008/11/22 Shall I top up my PC with excellent free software? by 프로그래머 (92)
  18. 2008/10/13 Shall I top up my blog with Wizard Factory widgets? by 프로그래머 (1)
  19. 2008/10/12 Newswire, a place to find news story material and blogging sources by 프로그래머
  20. 2008/09/19 Widgets built with RSS feeds and Wizard.com's Myzet by 프로그래머
  21. 2008/09/19 Hanmail, the webmail champion, cedes first place to Naver Mail by 프로그래머 (3)
  22. 2008/09/15 Newly designed profile buttons and the blogs using them by 프로그래머 (1)
  23. 2008/09/05 Decent free Flash (SWF) and FLV video players by 프로그래머 (5)
  24. 2008/05/27 Let's show search keywords in Textcube blog referrer logs by 프로그래머 (1)
  25. 2008/05/24 Let's block those annoying translated spam comments on my blog! by 프로그래머 (3)
  26. 2007/12/27 Vitrite, a utility that keeps a window always on top (Always On Top) by 프로그래머 (1)
  27. 2007/11/30 A simple piece of JavaScript you need when a blog address changes by 프로그래머 (7)
  28. 2007/11/26 My tool of choice when hundreds of files need editing by 프로그래머 (1)
  29. 2007/11/18 Name server (DNS) IPs of Korean telecom carriers and IDCs by 프로그래머 (1)
  30. 2007/11/14 Easy photo resizing and watermark insertion by 프로그래머 (7)
To set up PHP development in Eclipse, one of the popular integrated development environments (IDEs), let's install the programs and plug-ins listed below. Alongside them, tools such as XAMPP, cwRsync, PuTTY, and TortoiseSVN are well worth using too.

[Eclipse Download]
http://www.eclipse.org/downloads/

[Plug-in Update]
Help Menu - Install new Software...

[Indigo Update]
http://download.eclipse.org/releases/indigo

[PDT 3.0 Update]
http://download.eclipse.org/tools/pdt/updates/3.0/milestones/

[Subversion Plug-in, Subclipse Update]
http://subclipse.tigris.org/update_1.6.x

If you are also interested in mobile app development, install the Android SDK and the related Eclipse plug-in as well.

[Android Plug-in]
https://dl-ssl.google.com/android/eclipse

[Android SDK]
http://developer.android.com/sdk/

Posted by 프로그래머, 2011/07/22 10:22

I used to work mainly on Windows or Linux systems, then switched to an Apple Mac as my main machine, and quite some time has passed since. I mostly develop iPhone applications in Xcode, but gradually I have come to do on the Mac the things I used to do on other machines. As a result, the applications below have piled up on my MacBook Pro. There are still plenty of programs to discover and try, but it's fun to learn them one by one, and I'll keep building up the collection little by little. I won't expect too much, but if you have a program to recommend, feel free to drop a hint in the comments.^^

[CoRD]
http://cord.sourceforge.net/

[Zwoptex]
http://zwoptexapp.com/

[Tiled map editor]
http://www.mapeditor.org/

[Particle Designer]
http://particledesigner.71squared.com/

[TextMate]
http://macromates.com/

[jEdit]
http://www.jedit.org/

[iTerm]
http://iterm.sourceforge.net/

[FileZilla]
http://filezilla-project.org/

[Transmit]
http://www.panic.com/TRANSMIT/

[iAntivirus]
http://www.iantivirus.com/download/

[CSSEdit]
http://macrabbit.com/

[Disco]
http://www.discoapp.com/

[MacJournal]
http://www.marinersoftware.com/products/macjournal/

[Parallels Desktop]
http://www.parallels.com/

[Adobe Creative Suite]
http://www.adobe.com/products/creativesuite/

[MSN messenger]
http://www.microsoft.com/mac/products/messenger/

[NateOn messenger]
http://nateonweb.nate.com/download/messenger/mac/

[iPhone SDK]
http://developer.apple.com/iphone/

[iTunes]
http://www.apple.com/itunes/

[iWork]
http://www.apple.com/kr/iwork/

[Firefox]
http://www.mozilla.or.kr/ko/

The web programmer's homepage info blog http://hompy.info/617

Posted by 프로그래머, 2010/08/03 18:11

With a CMS solution that helps you publish and manage web content easily, an eCommerce solution that adds online shopping to a homepage, or an LMS solution for running and managing an online education site, you can build most of the homepages you might be planning. The various open source solutions listed below are exactly that, and using them will let you build a homepage faster and more easily. Since the source code is open as well, they are also useful to anyone analyzing or developing similar solutions. I have grouped solutions widely used in Korea and abroad into CMS, eCommerce, LMS, and BLOG, but many of them mix several roles or can be combined through plug-ins, so the classification may not mean much. In the end, installing, testing, and experiencing a solution yourself is the best way to understand it.

[CMS]
Drupal
http://drupal.org/

Joomla
http://www.joomla.org/

Plone
http://plone.org/

XpressEngine (ZeroBoard)
http://www.xpressengine.com/

kimsQ
http://dev.kimsq.com/

GnuBoard
http://sir.co.kr/main/gnuboard4/


[eCommerce]
osCommerce
http://www.oscommerce.com/

Magento
http://www.magentocommerce.com/

PrestaShop
http://www.prestashop.com/

ZenCart
http://www.zen-cart.com/

ShoppingOS
http://www.shoppingos.net/

WizMall
http://www.shop-wiz.com/subwizmall.php

TOPs
http://topsmate.net/?pgname=home/home_infoprog


[LMS]
ILIAS
http://www.ilias.de/docu/

Moodle
http://moodle.org/

Sakai
http://sakaiproject.org/

Claroline
http://www.claroline.net/

.LRN (DotLearn)
http://www.dotlrn.org/

ATutor
http://www.atutor.ca/


[BLOG]
WordPress
http://wordpress.org/

TextCube
http://www.textcube.org/

The web programmer's homepage info blog http://hompy.info/593

Posted by 프로그래머, 2010/02/15 15:26

A few days after receiving a copy of Windows 7 Ultimate as a gift at the "Windows 7 launch party with 777 Korean power bloggers" hosted by Microsoft Korea, I installed it on my work computer. Installation was much simpler than with previous versions of Windows: every piece of hardware was recognized without installing separate device drivers, the software I had been using ran without problems, and the perceived speed was nothing like the heaviness of Windows Vista; the UI visuals improved, yet it felt as light as Windows XP. So I decided it was time to switch my Windows machines over to Windows 7 and installed it on my TG Sambo Averatec netbook and my Apple MacBook Pro as well. The netbook needed extra drivers from the manufacturer, and the MacBook has two device driver issues, but both are running fine; finally I installed it on my desktop PC at home, and it has been working well ever since. For several weeks now, both at home and at the office, I have been using it without trouble, enough to forget the Windows XP I had used for so long, and I am making good use of the new features added in Windows 7. If you are still on Windows XP or Vista, I think it is worth switching to Windows 7, and although it costs extra, pairing Windows 7 with a multi-touch PC should make for an even more enjoyable and fresh experience. Now that Windows 7 has raised the bar for the Windows line of operating systems, the other OS camps will not sit still, and I expect the industry-wide effort to improve OS quality to pick up. Thank you for the slimmer, more stylish Windows 7; I will put it to good use.
 
The web programmer's homepage info blog http://hompy.info/590

Posted by 프로그래머, 2009/11/21 23:29

With the recent growth of microblogs such as Twitter, free short URL redirection services are being used more and more. I happen to own a short domain, so over the weekend I put together and opened a simple URL shortening site. It was built in a hurry, so there are some rough edges, but I will keep improving it step by step.^^

Using the URL shortening service is simple. Type the long URL you want to shorten into the input box and press [ENTER] or click the [SHORT URL] button, and a short address such as "http://cug.kr/1a5z4d" is generated automatically. Copy the generated address into the address bar of a browser such as IE or Firefox and you can confirm that it redirects to the original address. Once confirmed, use it as a shortcut link wherever you like.

Overseas, TinyURL.com appears to be the most widely used, and quite a few services at home and abroad offer URL shortening. Their basic features are similar, but some also provide an open API or plug-ins to make them easier to use. If you need one, it is worth finding the service that suits you best.

[Long URL]
http://local.daum.net/map/index.jsp?cx=505252&cy=1111724&level=3&panoid=236824&pan=166.72824563771616&tilt=-4.344929946408747&map_type=TYPE_MAP&map_hybrid=false&map_attribute=ROADVIEW&screenMode=normal

If you would like to turn a long, hard-to-memorize address like the sample above into a simple one, try
http://cug.kr/

[Short URL]
http://cug.kr/2
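
The redirect behind a service like this is conceptually simple. Below is a minimal PHP sketch of the idea, assuming a hypothetical short_urls lookup table; it is not the actual code running on cug.kr.

<?php
// Minimal short-URL redirect sketch (hypothetical "short_urls" table mapping code -> long_url)
$code = trim($_SERVER['REQUEST_URI'], '/');                    // e.g. "1a5z4d" for http://cug.kr/1a5z4d
$db   = new PDO('mysql:host=localhost;dbname=shortener', 'user', 'password');
$stmt = $db->prepare('SELECT long_url FROM short_urls WHERE code = ?');
$stmt->execute(array($code));
$longUrl = $stmt->fetchColumn();
if ($longUrl) {
    header('Location: ' . $longUrl, true, 301);                // permanent redirect to the original address
} else {
    header('HTTP/1.1 404 Not Found');
    echo 'Unknown short URL';
}
?>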

If anything is inconvenient to use, please leave a comment and I will do my best to address it.
I can also hand out free subdomains of cug.kr, like the examples below, for use as a blog or homepage address, so leave a private comment with the information needed to register the domain.

[Sample Domain]
http://office.cug.kr
http://design.cug.kr

The web programmer's Twitter http://twitter.com/refer
The web programmer's homepage info blog http://hompy.info/575

Posted by 프로그래머, 2009/03/29 15:51

Do you often use question-and-answer services where you can ask anything? Q&A services, one way of handling and sharing knowledge, let you ask questions across a far wider range of topics than would be possible offline, fairly easily, and you usually get answers within a relatively short time. These services keep evolving: some give you answers in real time, some get you an expert's answer for a small fee, some pay you for answering, and some conveniently let you ask questions or receive answers by text message. Some focus on a specific field, some mainly take academic questions, some offer advice and counsel on personal worries, and some limit questions and answers to your circle of contacts. In Korea, Naver's KnowledgeiN seems to have established itself as the most popular Q&A service, and overseas it is Yahoo Answers.

[Korean and International Q&A Services]
http://ask.nate.com/
http://k.daum.net/
http://kin.naver.com/
http://kr.ks.yahoo.com/
http://ksea.paran.com/
http://ksearch.d.paran.com/
http://answers.google.com/
http://answers.yahoo.com/
http://ask.metafilter.com/
http://askville.amazon.com/
http://qna.live.com/
http://qna.rediff.com/
http://wiki.answers.com/
http://wis.dm/
http://www.able2know.org/
http://www.advicenators.com/
http://www.akatoo.com/
http://www.allexperts.com/
http://www.ammas.com/
http://www.answerbag.com/
http://www.answerly.com/
http://www.answerology.com/
http://www.answers.com/
http://www.answerway.com/
http://www.askanything.com/
http://www.askbar.com/
http://www.askmehelpdesk.com/
http://www.askpedia.com/
http://www.chacha.com/
http://www.dizzay.com/
http://www.expertbee.com/
http://www.experts-exchange.com/
http://www.fluther.com/
http://www.funadvice.com/
http://www.grupthink.com/
http://www.helpglobe.com/
http://www.hiogi.com/
http://www.justanswer.com/
http://www.knowbrainers.com/
http://www.linkedin.com/answers
http://www.liveperson.com/
http://www.mahalo.com/answers
http://www.minti.com/questions-and-answers
http://www.mosio.com/
http://www.mturk.com/
http://www.mylot.com/
http://www.oyogi.com/
http://www.pointask.com/
http://www.stackoverflow.com/
http://www.simplyexplained.com/
http://www.theanswerbank.co.uk/
http://www.trulia.com/voices/
http://www.uclue.com/
http://www.wispoon.com/
http://www.wondir.com/
http://www.yedda.com/
http://zhidao.baidu.com/

The web programmer's homepage info blog http://hompy.info/573

Posted by 프로그래머, 2009/03/14 15:33

With the frozen economy raising concerns about shrinking job numbers, my workplace recently launched an SNS-based job information site, which made me curious about how the portals handle job listings, so I looked around a bit. Nate and Dreamwiz offer job search services, while on Daum, Yahoo, and Paran, searching for the keyword "채용" (jobs) brings up a separate job-search box on the results page that links to partner employment sites. Daum, Paran, and Chollian, which run standalone job sections, provide listings from specific partner job sites. Naver and Google, on the other hand, do not seem to have a dedicated job search or job section. If you want to check for yourself, the links below will get you there.

[Nate Job Search]
〓▷ Links to individual job sites
http://search.nate.com/search/job.html?q=%C3%A4%BF%EB

[Dreamwiz Job Search]
〓▷ Links to individual job sites
http://search.d.paran.com/sbs/job/jobinfo/index.php?Query=%C3%A4%BF%EB

[Daum Job Search Box]
〓▷ Links to Career search
http://search.daum.net/search?q=%C3%A4%BF%EB

[Yahoo Job Search Box]
〓▷ Links to Incruit "Naeil" job search
http://kr.search.yahoo.com/search?p=%C3%A4%BF%EB

[Paran Job Information]
〓▷ Links to Incruit "Naeil" job search
http://search.paran.com/search/index.php?Query=%C3%A4%BF%EB

[Hanafos Job Theme Search]
〓▷ Links to the Yahoo search page and Incruit "Naeil" job search
http://kr.hanafos.search.yahoo.com/search/hanafos/combo?p=%EC%B1%84%EC%9A%A9

[Daum Job Center]
〓▷ Provides Career job listings
http://job.daum.net/

[Paran Jobs]
〓▷ Provides Incruit job listings
http://job.paran.com/

[Chollian Jobs]
〓▷ Provides Scout job listings
http://job.chol.com/

The web programmer's homepage info blog http://hompy.info/571

Posted by 프로그래머, 2009/03/07 12:29

I installed Cheat Engine, an open source utility that can modify the memory data of running programs, launched AntBuster, an ant-extermination game that is good for killing time, and recorded a video of using Cheat Engine to edit the game's data so the ants cannot carry off the cake. Editing game data can be fun once in a while, but overdoing it spoils the fun of the game, so it is best done in moderation.

[Cheat Engine Homepage]
http://www.cheatengine.org/

[AntBuster Flash Game]
http://www.hompydesign.com/game/antbuster.php

[Cheat Engine Test Video]

The web programmer's homepage info blog http://hompy.info/568

Posted by 프로그래머, 2009/02/07 23:27

With NateOn 4.0 Beta, clubs (cafes) registered as mini clubs can be used conveniently from the messenger instead of a web browser, and the recently added FreeTalk feature can be used in real time. FreeTalk works like a chat: comments appear in real time and, unlike a bulletin board, messages are limited to 200 characters, so you can exchange short, casual posts without pressure. Anyone who installs NateOn 4.0 Beta can create a mini club, and when you join an existing mini club, a mini club folder appears at the bottom of the messenger's contact list; clicking the manage button there adds the club to the list of linked mini clubs so you can see its real-time updates. Once a club is on the linked list, a notification pops up at the bottom right of Windows whenever a new post or comment arrives, and clicking the notification shows the fresh post immediately. Members seeing a mini club for the first time generally find it fascinating. The real-time comments can be quite absorbing, though, and may sometimes interfere with work, so use it in moderation to avoid getting hooked; used as a tool for exchanging information on a specific topic in real time, it should prove very useful.

[NateOn 4.0 Beta FreeTalk]

[NateOn 4.0 Mini Club Folder]

[Mini Clubs I Run]
http://club.cyworld.com/officezone
http://club.cyworld.com/flashzone

The web programmer's homepage info blog http://hompy.info/567

Posted by 프로그래머, 2009/01/31 10:42

I had been wondering which portal's map API to use to add a map view to the sites I manage when the Daum map service was overhauled, and the result raised the bar so much that there were even news stories saying it was "too good". Click the image below to see the spot I picked in Daum's Road View. If you try it yourself or watch the video below, you will see that Road View lets you look around the street as if you were there, rotating 360 degrees and moving forward and back, so you can experience a place you have never visited as if you had. For now only major roads are covered, but I expect more streets to get Road View over time; using a Road View link with a real sense of place for appointments, meetups, interview locations, or a company's directions map should make places much easier to find, and I expect it to be widely used. The other portals will no doubt release comparable services soon, and the competition should keep Korean map services evolving. Road View does not yet appear to be available for mashups through the open API, but I hope it will be soon.




The web programmer's homepage info blog http://hompy.info/564

Posted by 프로그래머, 2009/01/24 15:24

If you have been using network commands at the Windows command prompt to look up and manage IPs, the HoverIP tool lets you do the same things conveniently without typing commands one by one. The program has six tabs: the "IP Config" tab shows your machine's network adapter and IP information, the "Nslookup" tab looks up the IP address for a domain you enter, and the "Routing table" tab lets you update the routing table if you have the networking expertise. The "Ping" tab checks the network speed to a remote computer, the "Traceroute" tab shows the path and speed of packets traveling to a remote host, and the "Port scanning" tab lists the port numbers a remote computer will accept connections on, showing which services are reachable. The images below are captures of each tab displaying its information.

HoverIP - Network IP Management Tool
   http://www.hoverdesk.net/freeware.htm
   http://www.hoverdesk.net/dl/en/HoverIP.zip







The web programmer's homepage info blog http://hompy.info

Posted by 프로그래머, 2008/12/21 17:44

This is the reference manual for Sphinx 0.9.9, the open source search engine. I ran a few tests and it looks good; when I find the time I plan to use it for my blog's search feature. There does not seem to be a Korean translation of the documentation, so later on I may translate the manual or put together some material on how to use Sphinx.

Sphinx 0.9.9 reference manual

1. Introduction

1.1. About

Sphinx is a full-text search engine, distributed under GPL version 2. Commercial licensing (eg. for embedded use) is also available upon request.

Generally, it's a standalone search engine, meant to provide fast, size-efficient and relevant full-text search functions to other applications. Sphinx was specially designed to integrate well with SQL databases and scripting languages.

Currently built-in data source drivers support fetching data either via direct connection to MySQL, or PostgreSQL, or from a pipe in a custom XML format. Adding new drivers (eg. to natively support some other DBMSes) is designed to be as easy as possible.

The search API is natively ported to PHP, Python, Perl, Ruby, Java, and is also available as a pluggable MySQL storage engine. The API is very lightweight, so porting it to a new language is known to take a few hours.

As for the name, Sphinx is an acronym which is officially decoded as SQL Phrase Index. Yes, I know about CMU's Sphinx project.

1.2. Sphinx features


  • high indexing speed (up to 10 MB/sec on modern CPUs);
  • high search speed (avg query is under 0.1 sec on 2-4 GB text collections);
  • high scalability (up to 100 GB of text, up to 100 M documents on a single CPU);
  • provides good relevance ranking through combination of phrase proximity ranking and statistical (BM25) ranking;
  • provides distributed searching capabilities;
  • provides document excerpts generation;
  • provides searching from within MySQL through pluggable storage engine;
  • supports boolean, phrase, and word proximity queries;
  • supports multiple full-text fields per document (up to 32 by default);
  • supports multiple additional attributes per document (ie. groups, timestamps, etc);
  • supports stopwords;
  • supports both single-byte encodings and UTF-8;
  • supports English stemming, Russian stemming, and Soundex for morphology;
  • supports MySQL natively (MyISAM and InnoDB tables are both supported);
  • supports PostgreSQL natively.

1.3. Where to get Sphinx

Sphinx is available through its official Web site at http://www.sphinxsearch.com/.

Currently, Sphinx distribution tarball includes the following software:

  • indexer: a utility which creates fulltext indexes;
  • search: a simple command-line (CLI) test utility which searches through fulltext indexes;
  • searchd: a daemon which enables external software (eg. Web applications) to search through fulltext indexes;
  • sphinxapi: a set of searchd client API libraries for popular Web scripting languages (PHP, Python, Perl, Ruby).
  • spelldump: a simple command-line tool to extract the items from an ispell dictionary to help customize your index, for use with wordforms.

1.4. License

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. See COPYING file for details.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

If you don't want to be bound by GNU GPL terms (for instance, if you would like to embed Sphinx in your software, but would not like to disclose its source code), please contact the author to obtain a commercial license.

1.5. Author and contributors

Author

Sphinx initial author and current primary developer is:


Contributors

People who contributed to Sphinx and their contributions (in no particular order) are:

  • Robert "coredev" Bengtsson (Sweden), initial version of PostgreSQL data source;
  • Len Kranendonk, Perl API
  • Dmytro Shteflyuk, Ruby API

Many other people have contributed ideas, bug reports, fixes, etc. Thank you!

1.6. History

Sphinx development was started back in 2001, because I didn't manage to find an acceptable search solution (for a database driven Web site) which would meet my requirements. Actually, each and every important aspect was a problem:

  • search quality (ie. good relevance)
    • statistical ranking methods performed rather badly, especially on large collections of small documents (forums, blogs, etc)
  • search speed
    • especially if searching for phrases which contain stopwords, as in "to be or not to be"
  • moderate disk and CPU requirements when indexing
    • important in shared hosting environments, not to mention the indexing speed.

Despite the amount of time passed and numerous improvements made in the other solutions, there's still no solution which I personally would be eager to migrate to.

Considering that and a lot of positive feedback received from Sphinx users during last years, the obvious decision is to continue developing Sphinx (and, eventually, to take over the world).

2. Installation

2.1. Supported systems

Most modern UNIX systems with a C++ compiler should be able to compile and run Sphinx without any modifications.

Currently known systems Sphinx has been successfully running on are:

  • Linux 2.4.x, 2.6.x (various distributions)
  • Windows 2000, XP
  • FreeBSD 4.x, 5.x, 6.x
  • NetBSD 1.6, 3.0
  • Solaris 9, 11
  • Mac OS X

CPU architectures known to work include X86, X86-64, SPARC64.

I hope Sphinx will work on other Unix platforms as well. If the platform you run Sphinx on is not in this list, please do report it.

At the moment, the Windows version of Sphinx is not intended to be used in production, but rather for testing and debugging only. The two most prominent issues are missing concurrent queries support (client queries are stacked on the TCP connection level instead) and missing index data rotation support. There are successful production installations which work around these issues. However, running a high-volume search service under Windows is still not recommended.

2.2. Required tools

On UNIX, you will need the following tools to build and install Sphinx:

  • a working C++ compiler. GNU gcc is known to work.
  • a good make program. GNU make is known to work.

On Windows, you will need Microsoft Visual C/C++ Studio .NET 2003 or 2005. Other compilers/environments will probably work as well, but for the time being, you will have to build makefile (or other environment specific project files) manually.

2.3. Installing Sphinx on Linux

  1. Extract everything from the distribution tarball (haven't you already?) and go to the sphinx subdirectory:

    $ tar xzvf sphinx-0.9.8.tar.gz
    $ cd sphinx
     

  2. Run the configuration program:

    There's a number of options to configure. The complete listing may be obtained by using --help switch. The most important ones are:

    • --prefix, which specifies where to install Sphinx; such as --prefix=/usr/local/sphinx (all of the examples use this prefix)
    • --with-mysql, which specifies where to look for MySQL include and library files, if auto-detection fails;
    • --with-pgsql, which specifies where to look for PostgreSQL include and library files.

    $ ./configure

  3. Build the binaries:

    $ make

  4. Install the binaries in the directory of your choice: (defaults to /usr/local/bin/ on *nix systems, but is overridden with configure --prefix)

    $ make install

2.4. Installing Sphinx on Windows

Installing Sphinx on a Windows server is often easier than installing on a Linux environment; unless you are preparing code patches, you can use the pre-compiled binary files from the Downloads area on the website.

  1. Extract everything from the .zip file you have downloaded - sphinx-0.9.8-win32.zip (or sphinx-0.9.8-win32-pgsql.zip if you need PostgreSQL support as well.) You can use Windows Explorer in Windows XP and up to extract the files, or a freeware package like 7Zip to open the archive.

    For the remainder of this guide, we will assume that the folders are unzipped into C:\Sphinx, such that searchd.exe can be found in C:\Sphinx\bin\searchd.exe. If you decide to use any different location for the folders or configuration file, please change it accordingly.

  2. Install the searchd system as a Windows service:

    C:\Sphinx> C:\Sphinx\bin\searchd --install --config C:\Sphinx\sphinx.conf --servicename SphinxSearch

  3. The searchd service will now be listed in the Services panel within the Management Console, available from Administrative Tools. It will not have been started, as you will need to configure it and build your indexes with indexer before starting the service. A guide to do this can be found under Quick tour.

2.5. Known installation issues

If configure fails to locate MySQL headers and/or libraries, try checking for and installing mysql-devel package. On some systems, it is not installed by default.

If make fails with a message which looks like

/bin/sh: g++: command not found
make[1]: *** [libsphinx_a-sphinx.o] Error 127

try checking for and installing gcc-c++ package.

If you are getting compile-time errors which look like

sphinx.cpp:67: error: invalid application of `sizeof' to
    incomplete type `Private::SizeError<false>'

this means that some compile-time type size check failed. The most probable reason is that off_t type is less than 64-bit on your system. As a quick hack, you can edit sphinx.h and replace off_t with DWORD in a typedef for SphOffset_t, but note that this will prohibit you from using full-text indexes larger than 2 GB. Even if the hack helps, please report such issues, providing the exact error message and compiler/OS details, so I could properly fix them in next releases.

If you keep getting any other error, or the suggestions above do not seem to help you, please don't hesitate to contact me.

2.6. Quick Sphinx usage tour

All the example commands below assume that you installed Sphinx in /usr/local/sphinx, so searchd can be found in /usr/local/sphinx/bin/searchd.

To use Sphinx, you will need to:

  1. Create a configuration file.

    Default configuration file name is sphinx.conf. All Sphinx programs look for this file in current working directory by default.

    Sample configuration file, sphinx.conf.dist, which has all the options documented, is created by configure. Copy and edit that sample file to make your own configuration (assuming Sphinx is installed into /usr/local/sphinx/):

    $ cd /usr/local/sphinx/etc
    $ cp sphinx.conf.dist sphinx.conf
    $ vi sphinx.conf

    The sample configuration file is set up to index the documents table from the MySQL database test; the example.sql sample data file is provided to populate that table with a few documents for testing purposes:

    $ mysql -u test < /usr/local/sphinx/etc/example.sql

  2. Run the indexer to create full-text index from your data:

    $ cd /usr/local/sphinx/etc
    $ /usr/local/sphinx/bin/indexer

  3. Query your newly created index!

To query the index from command line, use search utility:

    $ cd /usr/local/sphinx/etc
    $ /usr/local/sphinx/bin/search test

To query the index from your PHP scripts, you need to:

  1. Run the search daemon which your script will talk to:

    $ cd /usr/local/sphinx/etc
    $ /usr/local/sphinx/bin/searchd

  2. Run the attached PHP API test script (to ensure that the daemon was successfully started and is ready to serve the queries):

    $ cd sphinx/api
    $ php test.php test

  3. Include the API (it's located in api/sphinxapi.php) into your own scripts and use it.
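
As a rough sketch of what such a script could look like (the index name "test1", the host, and the port below are assumptions; use whatever your sphinx.conf actually defines):

<?php
// rough usage sketch of the bundled PHP API (api/sphinxapi.php)
require_once ( "sphinxapi.php" );

$cl = new SphinxClient ();
$cl->SetServer ( "localhost", 9312 );        // must match the searchd host/port in your sphinx.conf
$cl->SetMatchMode ( SPH_MATCH_ALL );         // match all query words

$result = $cl->Query ( "test", "test1" );    // query string, index name
if ( $result === false )
    echo "Query failed: " . $cl->GetLastError() . "\n";
else if ( !empty($result["matches"]) )
    foreach ( $result["matches"] as $docId => $match )
        echo "found document $docId, weight " . $match["weight"] . "\n";
?>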

Happy searching!


3. Indexing

3.1. Data sources

The data to be indexed can generally come from very different sources: SQL databases, plain text files, HTML files, mailboxes, and so on. From Sphinx's point of view, the data it indexes is a set of structured documents, each of which has the same set of fields. This is biased towards SQL, where each row corresponds to a document, and each column to a field.

Depending on what source Sphinx should get the data from, different code is required to fetch the data and prepare it for indexing. This code is called data source driver (or simply driver or data source for brevity).

At the time of this writing, there are drivers for MySQL and PostgreSQL databases, which can connect to the database using its native C/C++ API, run queries and fetch the data. There's also a driver called xmlpipe, which runs a specified command and reads the data from its stdout. See Section 3.8, “xmlpipe data source” section for the format description.

There can be as many sources per index as necessary. They will be sequentially processed in the very same order which was specified in the index definition. All the documents coming from those sources will be merged as if they were coming from a single source.

3.2. Attributes

Attributes are additional values associated with each document that can be used to perform additional filtering and sorting during search.

It is often desired to additionally process full-text search results based not only on matching document ID and its rank, but on a number of other per-document values as well. For instance, one might need to sort news search results by date and then relevance, or search through products within specified price range, or limit blog search to posts made by selected users, or group results by month. To do that efficiently, Sphinx allows to attach a number of additional attributes to each document, and store their values in the full-text index. It's then possible to use stored values to filter, sort, or group full-text matches.

Attributes, unlike the fields, are not full-text indexed. They are stored in the index, but it is not possible to search them as full-text, and attempting to do so results in an error.

For example, it is impossible to use the extended matching mode expression @column 1 to match documents where column is 1, if column is an attribute, and this is still true even if the numeric digits are normally indexed.

Attributes can be used for filtering, though, to restrict returned rows, as well as sorting or result grouping; it is entirely possible to sort results purely based on attributes, and ignore the search relevance tools. Additionally, attributes are returned from the search daemon, while the indexed text is not.

A good example for attributes would be a forum posts table. Assume that only title and content fields need to be full-text searchable - but that sometimes it is also required to limit search to a certain author or a sub-forum (ie. search only those rows that have some specific values of author_id or forum_id columns in the SQL table); or to sort matches by post_date column; or to group matching posts by month of the post_date and calculate per-group match counts.

This can be achieved by specifying all the mentioned columns (excluding title and content, that are full-text fields) as attributes, indexing them, and then using API calls to set up filtering, sorting, and grouping. Here is an example.

Example sphinx.conf part:


...
sql_query = SELECT id, title, content, \
	author_id, forum_id, post_date FROM my_forum_posts
sql_attr_uint = author_id
sql_attr_uint = forum_id
sql_attr_timestamp = post_date
...

Example application code (in PHP):


// only search posts by author whose ID is 123
$cl->SetFilter ( "author_id", array ( 123 ) );

// only search posts in sub-forums 1, 3 and 7
$cl->SetFilter ( "forum_id", array ( 1,3,7 ) );

// sort found posts by posting date in descending order
$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "post_date" );
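
The month-by-month grouping mentioned above can be requested through the same API; for example (SetGroupBy() and the SPH_GROUPBY_MONTH constant ship with the standard PHP API):

// group matching posts by month of post_date, biggest groups first
$cl->SetGroupBy ( "post_date", SPH_GROUPBY_MONTH, "@count desc" );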

Attributes are named. Attribute names are case insensitive. Attributes are not full-text indexed; they are stored in the index as is. Currently supported attribute types are:

  • unsigned integers (1-bit to 32-bit wide);
  • UNIX timestamps;
  • floating point values (32-bit, IEEE 754 single precision);
  • string ordinals (specially computed integers);
  • MVA, multi-value attributes (variable-length lists of 32-bit unsigned integers).

The complete set of per-document attribute values is sometimes referred to as docinfo. Docinfos can either be

  • stored separately from the main full-text index data ("extern" storage, in .spa file), or
  • attached to each occurrence of document ID in full-text index data ("inline" storage, in .spd file).

When using extern storage, a copy of the .spa file (with all the attribute values for all the documents) is kept in RAM by searchd at all times. This is for performance reasons; random disk I/O would be too slow. On the contrary, inline storage does not require any additional RAM at all, but that comes at the cost of greatly inflating the index size: remember that it copies all attribute values every time the document ID is mentioned, and that is exactly as many times as there are different keywords in the document. Inline may be the only viable option if you have only a few attributes and need to work with big datasets in limited RAM. However, in most cases extern storage makes both indexing and searching much more efficient.

Search-time memory requirements for extern storage are (1+number_of_attrs)*number_of_docs*4 bytes, ie. 10 million docs with 2 groups and 1 timestamp will take (1+2+1)*10M*4 = 160 MB of RAM. This is PER DAEMON, not per query. searchd will allocate 160 MB on startup, read the data and keep it shared between queries. The children will NOT allocate any additional copies of this data.

3.3. MVA (multi-valued attributes)

MVAs, or multi-valued attributes, are an important special type of per-document attributes in Sphinx. MVAs make it possible to attach lists of values to every document. They are useful for article tags, product categories, etc. Filtering and group-by (but not sorting) on MVA attributes is supported.

Currently, MVA list entries are limited to unsigned 32-bit integers. The list length is not limited, you can have an arbitrary number of values attached to each document as long as RAM permits (.spm file that contains the MVA values will be precached in RAM by searchd). The source data can be taken either from a separate query, or from a document field; see source type in sql_attr_multi. In the first case the query will have to return pairs of document ID and MVA values, in the second one the field will be parsed for integer values. There are absolutely no requirements as to incoming data order; the values will be automatically grouped by document ID (and internally sorted within the same ID) during indexing anyway.

When filtering, a document will match the filter on MVA attribute if any of the values satisfy the filtering condition. (Therefore, documents that pass through exclude filters will not contain any of the forbidden values.) When grouping by MVA attribute, a document will contribute to as many groups as there are different MVA values associated with that document. For instance, if the collection contains exactly 1 document having a 'tag' MVA with values 5, 7, and 11, grouping on 'tag' will produce 3 groups with '@count' equal to 1 and '@groupby' key values of 5, 7, and 11 respectively. Also note that grouping by MVA might lead to duplicate documents in the result set: because each document can participate in many groups, it can be chosen as the best one in more than one group, leading to duplicate IDs. PHP API historically uses ordered hash on the document ID for the resulting rows; so you'll also need to use SetArrayResult() in order to employ group-by on MVA with PHP API.
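
For example, the group-by on the 'tag' MVA described above could be set up like this with the PHP API (a sketch, not a complete script):

// return matches as a plain array so duplicate document IDs are preserved
$cl->SetArrayResult ( true );

// group by the multi-valued 'tag' attribute, most frequent tags first
$cl->SetGroupBy ( "tag", SPH_GROUPBY_ATTR, "@count desc" );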

3.4. Indexes

To be able to answer full-text search queries fast, Sphinx needs to build a special data structure optimized for such queries from your text data. This structure is called index; and the process of building index from text is called indexing.

Different index types are well suited for different tasks. For example, a disk-based tree-based index would be easy to update (ie. insert new documents to existing index), but rather slow to search. Therefore, Sphinx architecture allows for different index types to be implemented easily.

The only index type which is implemented in Sphinx at the moment is designed for maximum indexing and searching speed. This comes at a cost of updates being really slow; theoretically, it might be slower to update this type of index than to reindex it from scratch. However, this very frequently could be worked around with multiple indexes, see Section 3.10, “Live index updates” for details.

It is planned to implement more index types, including the type which would be updateable in real time.

There can be as many indexes per configuration file as necessary. indexer utility can reindex either all of them (if --all option is specified), or a certain explicitly specified subset. searchd utility will serve all the specified indexes, and the clients can specify what indexes to search in run time.

3.5. Restrictions on the source data

There are a few different restrictions imposed on the source data which is going to be indexed by Sphinx, of which the single most important one is:

ALL DOCUMENT IDS MUST BE UNIQUE UNSIGNED NON-ZERO INTEGER NUMBERS (32-BIT OR 64-BIT, DEPENDING ON BUILD TIME SETTINGS).

If this requirement is not met, different bad things can happen. For instance, Sphinx can crash with an internal assertion while indexing; or produce strange results when searching due to conflicting IDs. Also, a 1000-pound gorilla might eventually come out of your display and start throwing barrels at you. You've been warned.

3.6. Charsets, case folding, and translation tables

When indexing some index, Sphinx fetches documents from the specified sources, splits the text into words, and does case folding so that "Abc", "ABC" and "abc" would be treated as the same word (or, to be pedantic, term).

To do that properly, Sphinx needs to know

  • what encoding is the source text in;
  • what characters are letters and what are not;
  • what letters should be folded to what letters.

This should be configured on a per-index basis using charset_type and charset_table options. charset_type specifies whether the document encoding is single-byte (SBCS) or UTF-8. charset_table specifies the table that maps letter characters to their case folded versions. The characters that are not in the table are considered to be non-letters and will be treated as word separators when indexing or searching through this index.

Note that while default tables do not include space character (ASCII code 0x20, Unicode U+0020) as a letter, it's in fact perfectly legal to do so. This can be useful, for instance, for indexing tag clouds, so that space-separated word sets would index as a single search query term.

Default tables currently include English and Russian characters. Please do submit your tables for other languages!

3.7. SQL data sources (MySQL, PostgreSQL)

With all the SQL drivers, indexing generally works as follows.

  • connection to the database is established;
  • pre-query (see Section 9.1.9, “sql_query_pre”) is executed to perform any necessary initial setup, such as setting per-connection encoding with MySQL;
  • main query (see Section 9.1.10, “sql_query”) is executed and the rows it returns are indexed;
  • post-query (see Section 9.1.21, “sql_query_post”) is executed to perform any necessary cleanup;
  • connection to the database is closed;
  • indexer does the sorting phase (to be pedantic, index-type specific post-processing);
  • connection to the database is established again;
  • post-index query (see Section 9.1.22, “sql_query_post_index”) is executed to perform any necessary final cleanup;
  • connection to the database is closed again.

Most options, such as database user/host/password, are straightforward. However, there are a few subtle things, which are discussed in more detail here.

Ranged queries

Main query, which needs to fetch all the documents, can impose a read lock on the whole table and stall the concurrent queries (eg. INSERTs to MyISAM table), waste a lot of memory for result set, etc. To avoid this, Sphinx supports so-called ranged queries. With ranged queries, Sphinx first fetches min and max document IDs from the table, and then substitutes different ID intervals into main query text and runs the modified query to fetch another chunk of documents. Here's an example.

Example 1. Ranged query usage example

# in sphinx.conf

sql_query_range	= SELECT MIN(id),MAX(id) FROM documents
sql_range_step = 1000
sql_query = SELECT * FROM documents WHERE id>=$start AND id<=$end

If the table contains document IDs from 1 to, say, 2345, then sql_query would be run three times:

  1. with $start replaced with 1 and $end replaced with 1000;
  2. with $start replaced with 1001 and $end replaced with 2000;
  3. with $start replaced with 2001 and $end replaced with 2345.

Obviously, that's not much of a difference for 2000-row table, but when it comes to indexing 10-million-row MyISAM table, ranged queries might be of some help.

sql_post vs. sql_post_index

The difference between post-query and post-index query is that post-query is run immediately when Sphinx has received all the documents, but further indexing may still fail for some other reason. On the contrary, by the time the post-index query gets executed, it is guaranteed that the indexing was successful. The database connection is dropped and re-established because the sorting phase can be very lengthy and would just time out otherwise.

3.8. xmlpipe data source

xmlpipe data source was designed to enable users to plug data into Sphinx without having to implement new data source drivers themselves. It is limited to 2 fixed fields and 2 fixed attributes, and is now deprecated in favor of Section 3.9, “xmlpipe2 data source”. For new streams, use xmlpipe2.

To use xmlpipe, configure the data source in your configuration file as follows:

source example_xmlpipe_source
{
    type = xmlpipe
    xmlpipe_command = perl /www/mysite.com/bin/sphinxpipe.pl
}

The indexer will run the command specified in xmlpipe_command, and then read, parse and index the data it prints to stdout. More formally, it opens a pipe to given command and then reads from that pipe.

indexer will expect one or more documents in custom XML format. Here's the example document stream, consisting of two documents:

Example 2. XMLpipe document stream

<document>
<id>123</id>
<group>45</group>
<timestamp>1132223498</timestamp>
<title>test title</title>
<body>
this is my document body
</body>
</document>

<document>
<id>124</id>
<group>46</group>
<timestamp>1132223498</timestamp>
<title>another test</title>
<body>
this is another document
</body>
</document>


The legacy xmlpipe driver uses a built-in parser which is pretty fast but really strict and does not actually fully support XML. It requires that all the fields must be present, formatted exactly as in this example, and occur exactly in the same order. The only optional field is timestamp; it defaults to 1.

3.9. xmlpipe2 data source

xmlpipe2 lets you pass arbitrary full-text and attribute data to Sphinx in yet another custom XML format. It also allows to specify the schema (ie. the set of fields and attributes) either in the XML stream itself, or in the source settings.

When indexing xmlpipe2 source, indexer runs the given command, opens a pipe to its stdout, and expects well-formed XML stream. Here's sample stream data:

Example 3. xmlpipe2 document stream

<?xml version="1.0" encoding="utf-8"?>
<sphinx:docset>

<sphinx:schema>
<sphinx:field name="subject"/> 
<sphinx:field name="content"/>
<sphinx:attr name="published" type="timestamp"/>
<sphinx:attr name="author_id" type="int" bits="16" default="1"/>
</sphinx:schema>

<sphinx:document id="1234">
<content>this is the main content
<![CDATA[[and this <cdata> entry must be handled properly by xml parser lib]]>
</content>
<published>1012325463</published>
<subject>note how field/attr tags can be in <b class="red">randomized</b> order</subject>
<misc>some undeclared element</misc>
</sphinx:document>

<!-- ... more documents here ... -->

</sphinx:docset>


Arbitrary fields and attributes are allowed. They also can occur in the stream in arbitrary order within each document; the order is ignored. There is a restriction on maximum field length; fields longer than 2 MB will be truncated to 2 MB (this limit can be changed in the source).

The schema, ie. complete fields and attributes list, must be declared before any document could be parsed. This can be done either in the configuration file using xmlpipe_field and xmlpipe_attr_XXX settings, or right in the stream using <sphinx:schema> element. <sphinx:schema> is optional. It is only allowed to occur as the very first sub-element in <sphinx:docset>. If there is no in-stream schema definition, settings from the configuration file will be used. Otherwise, stream settings take precedence.

Unknown tags (which were declared neither as fields nor as attributes) will be ignored with a warning. In the example above, <misc> will be ignored. All embedded tags and their attributes (such as <b> in <subject> in the example above) will be silently ignored.

Support for incoming stream encodings depends on whether iconv is installed on the system. xmlpipe2 is parsed using libexpat parser that understands US-ASCII, ISO-8859-1, UTF-8 and a few UTF-16 variants natively. Sphinx configure script will also check for libiconv presence, and utilize it to handle other encodings. libexpat also enforces the requirement to use UTF-8 charset on Sphinx side, because the parsed data it returns is always in UTF-8.

XML elements (tags) recognized by xmlpipe2 (and their attributes where applicable) are:

sphinx:docset
Mandatory top-level element, denotes and contains xmlpipe2 document set.
sphinx:schema
Optional element, must either occur as the very first child of sphinx:docset, or never occur at all. Declares the document schema. Contains field and attribute declarations. If present, overrides per-source settings from the configuration file.
sphinx:field
Optional element, child of sphinx:schema. Declares a full-text field. The only recognized attribute is "name", it specifies the element name that should be treated as a full-text field in the subsequent documents.
sphinx:attr
Optional element, child of sphinx:schema. Declares an attribute. Known attributes are:
  • "name", specifies the element name that should be treated as an attribute in the subsequent documents.
  • "type", specifies the attribute type. Possible values are "int", "timestamp", "str2ordinal", "bool", "float" and "multi".
  • "bits", specifies the bit size for "int" attribute type. Valid values are 1 to 32.
  • "default", specifies the default value for this attribute that should be used if the attribute's element is not present in the document.
sphinx:document
Mandatory element, must be a child of sphinx:docset. Contains arbitrary other elements with field and attribute values to be indexed, as declared either using sphinx:field and sphinx:attr elements or in the configuration file. The only known attribute is "id" that must contain the unique integer document ID.

3.10. Live index updates

There's a frequent situation when the total dataset is too big to be reindexed from scratch often, but the amount of new records is rather small. Example: a forum with a 1,000,000 archived posts, but only 1,000 new posts per day.

In this case, "live" (almost real time) index updates could be implemented using so called "main+delta" scheme.

The idea is to set up two sources and two indexes, with one "main" index for the data which only changes rarely (if ever), and one "delta" for the new documents. In the example above, 1,000,000 archived posts would go to the main index, and newly inserted 1,000 posts/day would go to the delta index. Delta index could then be reindexed very frequently, and the documents can be made available to search in a matter of minutes.

Specifying which documents should go to what index and reindexing the main index could also be made fully automatic. One option would be to make a counter table which would track the ID which would split the documents, and update it whenever the main index is reindexed.

Example 4. Fully automated live updates

# in MySQL
CREATE TABLE sph_counter
(
    counter_id INTEGER PRIMARY KEY NOT NULL,
    max_doc_id INTEGER NOT NULL
);

# in sphinx.conf
source main
{
    # ...
    sql_query_pre = SET NAMES utf8
    sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(id) FROM documents
    sql_query = SELECT id, title, body FROM documents \
        WHERE id<=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}

source delta : main
{
    sql_query_pre = SET NAMES utf8
    sql_query = SELECT id, title, body FROM documents \
        WHERE id>( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}

index main
{
    source = main
    path = /path/to/main
    # ... all the other settings
}

# note how all other settings are copied from main,
# but source and path are overridden (they MUST be)
index delta : main
{
    source = delta
    path = /path/to/delta
}


Note how we're overriding sql_query_pre in the delta source. We need to explicitly have that override. Otherwise REPLACE query would be run when indexing delta source too, effectively nullifying it. However, when we issue the directive in the inherited source for the first time, it removes all inherited values, so the encoding setup is also lost. So sql_query_pre in the delta can not just be empty; and we need to issue the encoding setup query explicitly once again.

3.11. Index merging

Merging two existing indexes can be more efficient than indexing the data from scratch, and is desired in some cases (such as merging 'main' and 'delta' indexes instead of simply reindexing 'main' in the 'main+delta' partitioning scheme). So indexer has an option to do that. Merging the indexes is normally faster than reindexing but still not instant on huge indexes. Basically, it will need to read the contents of both indexes once and write the result once. Merging a 100 GB and a 1 GB index, for example, will result in 202 GB of IO (but that's still likely less than the indexing from scratch requires).

The basic command syntax is as follows:

indexer --merge DSTINDEX SRCINDEX [--rotate]

Only the DSTINDEX index will be affected: the contents of SRCINDEX will be merged into it. --rotate switch will be required if DSTINDEX is already being served by searchd. The initially devised usage pattern is to merge a smaller update from SRCINDEX into DSTINDEX. Thus, when merging the attributes, values from SRCINDEX will win if duplicate document IDs are encountered. Note, however, that the "old" keywords will not be automatically removed in such cases. For example, if there's a keyword "old" associated with document 123 in DSTINDEX, and a keyword "new" associated with it in SRCINDEX, document 123 will be found by both keywords after the merge. You can supply an explicit condition to remove documents from DSTINDEX to mitigate that; the relevant switch is --merge-dst-range:

indexer --merge main delta --merge-dst-range deleted 0 0

This switch lets you apply filters to the destination index along with merging. There can be several filters; all of their conditions must be met in order to include the document in the resulting merged index. In the example above, the filter passes only those records where 'deleted' is 0, eliminating all records that were flagged as deleted (for instance, using UpdateAttributes() call).

4. Searching

4.1. Matching modes

There are the following matching modes available:

  • SPH_MATCH_ALL, matches all query words (default mode);
  • SPH_MATCH_ANY, matches any of the query words;
  • SPH_MATCH_PHRASE, matches query as a phrase, requiring perfect match;
  • SPH_MATCH_BOOLEAN, matches query as a boolean expression (see Section 4.2, “Boolean query syntax”);
  • SPH_MATCH_EXTENDED, matches query as an expression in Sphinx internal query language (see Section 4.3, “Extended query syntax”). As of 0.9.9, this has been superseded by SPH_MATCH_EXTENDED2, providing additional functionality and better performance. The ident is retained for legacy application code that will continue to be compatible once Sphinx and its components, including the API, are upgraded.
  • SPH_MATCH_EXTENDED2, matches query using the second version of the Extended matching mode.
  • SPH_MATCH_FULLSCAN, matches query, forcibly using the "full scan" mode as below. NB, any query terms will be ignored, such that filters, filter-ranges and grouping will still be applied, but no text-matching.

The SPH_MATCH_FULLSCAN mode will be automatically activated in place of the specified matching mode when the following conditions are met:

  1. The query string is empty (ie. its length is zero).
  2. docinfo storage is set to extern.

In full scan mode, all the indexed documents will be considered as matching. Such queries will still apply filters, sorting, and group by, but will not perform any full-text searching. This can be useful to unify full-text and non-full-text searching code, or to offload SQL server (there are cases when Sphinx scans will perform better than analogous MySQL queries). An example of using the full scan mode might be to find posts in a forum. By selecting the forum's user ID via SetFilter() but not actually providing any search text, Sphinx will match every document (i.e. every post) where SetFilter() would match - in this case providing every post from that user. By default this will be ordered by relevancy, followed by Sphinx document ID in ascending order (earliest first).
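
The forum example above might be expressed like this with the PHP API (the index name "forum_posts" is only an illustration):

// full scan: empty query text, filter on the author attribute only
$cl->SetMatchMode ( SPH_MATCH_FULLSCAN );
$cl->SetFilter ( "author_id", array ( 123 ) );
$result = $cl->Query ( "", "forum_posts" );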

4.2. Boolean query syntax

Boolean queries allow the following special operators to be used:

  • explicit operator AND:
    hello & world
  • operator OR:
    hello | world
  • operator NOT:
    hello -world
    hello !world
    
  • grouping:
    ( hello world )

Here's an example query which uses all these operators:

Example 5. Boolean query example

( cat -dog ) | ( cat -mouse)


There is always an implicit AND operator, so the "hello world" query actually means "hello & world".

OR operator precedence is higher than AND, so "looking for cat | dog | mouse" means "looking for ( cat | dog | mouse )" and not "(looking for cat) | dog | mouse".

Queries like "-dog", which implicitly include all documents from the collection, can not be evaluated. This is both for technical and performance reasons. Technically, Sphinx does not always keep a list of all IDs. Performance-wise, when the collection is huge (ie. 10-100M documents), evaluating such queries could take very long.
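
With the PHP API, a boolean query such as the one in Example 5 might be issued like this (the index name is an assumption):

$cl->SetMatchMode ( SPH_MATCH_BOOLEAN );
$result = $cl->Query ( "( cat -dog ) | ( cat -mouse )", "test1" );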

4.3. Extended query syntax

The following special operators can be used when using the extended matching mode:

  • operator OR:
    hello | world
  • operator NOT:
    hello -world
    hello !world
    
  • field search operator:
    @title hello @body world
  • field position limit modifier (introduced in version 0.9.9):
    @body[50] hello
  • multiple-field search operator:
    @(title, body) hello world
  • all-field search operator:
    @* hello
  • phrase search operator:
    "hello world"
  • proximity search operator:
    "hello world"~10
  • quorum matching operator:
    "the world is a wonderful place"/3
  • exact form operator (introduced in version 0.9.9):
    raining =cats and =dogs

Here's an example query which uses most of these operators:

Example 6. Extended matching mode: query example

"hello world" @title "example program"~5 @body python -(php|perl) @* code

The full meaning of this search is:

  • Find the words 'hello' and 'world' adjacently in any field in a document;
  • Additionally, the same document must also contain the words 'example' and 'program' in the title field, with up to, but not including, 5 words between the words in question; (E.g. "example PHP program" would be matched however "example script to introduce outside data into the correct context for your program" would not because two terms have 5 or more words between them)
  • Additionally, the same document must contain the word 'python' in the body field, but not contain either 'php' or 'perl';
  • Additionally, the same document must contain the word 'code' in any field.

There is always an implicit AND operator, so "hello world" means that both "hello" and "world" must be present in a matching document.

OR operator precedence is higher than AND, so "looking for cat | dog | mouse" means "looking for ( cat | dog | mouse )" and not "(looking for cat) | dog | mouse".

Field position limit, introduced in version 0.9.9, additionally restricts the searching to the first N positions within given field (or fields). For example, "@body[50] hello" will not match the documents where the keyword 'hello' occurs at position 51 or later in the body.

Proximity distance is specified in words, adjusted for word count, and applies to all words within quotes. For instance, "cat dog mouse"~5 query means that there must be less than 8-word span which contains all 3 words, ie. "CAT aaa bbb ccc DOG eee fff MOUSE" document will not match this query, because this span is exactly 8 words long.

Quorum matching operator introduces a kind of fuzzy matching. It will only match those documents that pass a given threshold of given words. The example above ("the world is a wonderful place"/3) will match all documents that have at least 3 of the 6 specified words.

Exact form operator, introduced in version 0.9.9, will match the document only if the keyword occurred in exactly the specified form. The default behaviour is to match the document if the stemmed keyword matches. For instance, "runs" query will match both the document that contains "runs" and the document that contains "running", because both forms stem to just "run" - while "=runs" query will only match the first document. Exact form operator requires index_exact_words option to be enabled. The operator affects the keywords, and thus can be used within phrase, proximity, or quorum operators.

Starting with 0.9.9, arbitrarily nested brackets and negations are allowed. However, the query must be possible to compute without involving an implicit list of all documents:

// correct query
aaa -(bbb -(ccc ddd))

// queries that are non-computable
-aaa
aaa | -bbb

4.4. Weighting

Specific weighting function (currently) depends on the search mode.

There are these major parts which are used in the weighting functions:

  1. phrase rank,
  2. statistical rank.

Phrase rank is based on the length of the longest common subsequence (LCS) of search words between the document body and the query phrase. So if there's a perfect phrase match in some document, its phrase rank will be the highest possible and equal to the query word count.

Statistical rank is based on the classic BM25 function, which only takes word frequencies into account. If a word is rare in the whole database (ie. low frequency over the document collection) or mentioned a lot in a specific document (ie. high frequency within the matching document), it receives more weight. The final BM25 weight is a floating point number between 0 and 1.

In all modes, per-field weighted phrase ranks are computed as the LCS multiplied by the per-field weight specified by the user. Per-field weights are integers, default to 1, and cannot be set lower than 1.

In SPH_MATCH_BOOLEAN mode, no weighting is performed at all, every match weight is set to 1.

In SPH_MATCH_ALL and SPH_MATCH_PHRASE modes, final weight is a sum of weighted phrase ranks.

In SPH_MATCH_ANY mode, the idea is essentially the same, but it also adds a count of matching words in each field. Before that, weighted phrase ranks are additionally multiplied by a value big enough to guarantee that a higher phrase rank in any field will make the match ranked higher, even if its field weight is low.

In SPH_MATCH_EXTENDED mode, final weight is a sum of weighted phrase ranks and BM25 weight, multiplied by 1000 and rounded to integer.

This is going to be changed, so that MATCH_ALL and MATCH_ANY modes use BM25 weights as well. This would improve search results in those match spans where phrase ranks are equal; this is especially useful for 1-word queries.

The key idea (in all modes, besides boolean) is that better subphrase matches are ranked higher, and perfect matches are pulled to the top. The author's experience is that this phrase proximity based ranking provides noticeably better search quality than any statistical scheme alone (such as BM25, which is commonly used in other search engines).

4.5. Sorting modes

There are the following result sorting modes available:

  • SPH_SORT_RELEVANCE mode, that sorts by relevance in descending order (best matches first);
  • SPH_SORT_ATTR_DESC mode, that sorts by an attribute in descending order (bigger attribute values first);
  • SPH_SORT_ATTR_ASC mode, that sorts by an attribute in ascending order (smaller attribute values first);
  • SPH_SORT_TIME_SEGMENTS mode, that sorts by time segments (last hour/day/week/month) in descending order, and then by relevance in descending order;
  • SPH_SORT_EXTENDED mode, that sorts by SQL-like combination of columns in ASC/DESC order;
  • SPH_SORT_EXPR mode, that sorts by an arithmetic expression.

SPH_SORT_RELEVANCE ignores any additional parameters and always sorts matches by relevance rank. All other modes require an additional sorting clause, with the syntax depending on specific mode. SPH_SORT_ATTR_ASC, SPH_SORT_ATTR_DESC and SPH_SORT_TIME_SEGMENTS modes require simply an attribute name. SPH_SORT_RELEVANCE is equivalent to sorting by "@weight DESC, @id ASC" in extended sorting mode, SPH_SORT_ATTR_ASC is equivalent to "attribute ASC, @weight DESC, @id ASC", and SPH_SORT_ATTR_DESC to "attribute DESC, @weight DESC, @id ASC" respectively.
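
For illustration, these equivalences map directly onto the SetSortMode() API call described in Section 6.3.3; a minimal sketch, where 'price' is a hypothetical user attribute and either call yields the same ordering:

$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "price" );
$cl->SetSortMode ( SPH_SORT_EXTENDED, "price DESC, @weight DESC, @id ASC" );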

SPH_SORT_TIME_SEGMENTS mode

In SPH_SORT_TIME_SEGMENTS mode, attribute values are split into so-called time segments, and then sorted by time segment first, and by relevance second.

The segments are calculated according to the current timestamp at the time when the search is performed, so the results would change over time. The segments are as follows:

  • last hour,
  • last day,
  • last week,
  • last month,
  • last 3 months,
  • everything else.

These segments are hardcoded, but it is trivial to change them if necessary.

This mode was added to support searching through blogs, news headlines, etc. When using time segments, recent records would be ranked higher because of the segment, but within the same segment, more relevant records would be ranked higher - unlike sorting by just the timestamp attribute, which would not take relevance into account at all.

SPH_SORT_EXTENDED mode

In SPH_SORT_EXTENDED mode, you can specify an SQL-like sort expression with up to 5 attributes (including internal attributes), eg:

@relevance DESC, price ASC, @id DESC

Both internal attributes (that are computed by the engine on the fly) and user attributes that were configured for this index are allowed. Internal attribute names must start with magic @-symbol; user attribute names can be used as is. In the example above, @relevance and @id are internal attributes and price is user-specified.

Known internal attributes are:

  • @id (match ID)
  • @weight (match weight)
  • @rank (match weight)
  • @relevance (match weight)
  • @random (return results in random order)

@rank and @relevance are just additional aliases to @weight.

SPH_SORT_EXPR mode

Expression sorting mode lets you sort the matches by an arbitrary arithmetic expression, involving attribute values, internal attributes (@id and @weight), arithmetic operations, and a number of built-in functions. Here's an example:

$cl->SetSortMode ( SPH_SORT_EXPR,
	"@weight + ( user_karma + ln(pageviews) )*0.1" );

The following operators and functions are supported. They are modeled after MySQL. The functions take a number of arguments depending on the specific function.

  • Operators: +, -, *, /, <, >, <=, >=, =, <>.
  • 0-argument functions: NOW().
  • Unary (1-argument) functions: ABS(), CEIL(), FLOOR(), SIN(), COS(), LN(), LOG2(), LOG10(), EXP(), SQRT(), BIGINT().
  • Binary (2-argument) functions: MIN(), MAX(), POW(), IDIV().
  • Ternary (3-argument) functions: IF().
  • Variable argument count functions: INTERVAL(), IN().

Calculations can be performed in three different modes: (a) using single-precision, 32-bit IEEE 754 floating point values (the default), (b) using signed 32-bit integers, (c) using 64-bit signed integers. The expression parser will automatically switch to integer mode if there are no operations that result in a floating point value. Otherwise, it will use the default floating point mode. For instance, "a+b" will be computed using 32-bit integers if both arguments are 32-bit integers; or using 64-bit integers if both arguments are integers but one of them is 64-bit; or in floats otherwise. However, "a/b" or "sqrt(a)" will always be computed in floats, because these operations return a non-integer result. To avoid the former, you can use IDIV(). Also, "a*b" will not be automatically promoted to 64-bit when the arguments are 32-bit. To enforce 64-bit results, you can use BIGINT(). (But note that if there are non-integer operations, BIGINT() will simply be ignored.)
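
As a quick illustration of the points above, the following sort expressions keep the computation in integer mode and force 64-bit promotion, respectively (a sketch; price_cents, views and votes are hypothetical integer attributes, and each line is an independent example):

$cl->SetSortMode ( SPH_SORT_EXPR, "IDIV(price_cents,100)" );  // integer division, stays in integer mode
$cl->SetSortMode ( SPH_SORT_EXPR, "BIGINT(views)*votes" );    // force the multiplication into 64-bit mode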

Comparison operators (eg. = or <=) return 1.0 when the condition is true and 0.0 otherwise. For instance, (a=b)+3 will evaluate to 4 when attribute 'a' is equal to attribute 'b', and to 3 when 'a' is not. Unlike MySQL, the equality comparisons (ie. = and <> operators) introduce a small equality threshold (1e-6 by default). If the difference between compared values is within the threshold, they will be considered equal.

All unary and binary functions are straightforward; they behave just like their mathematical counterparts. But IF() behavior needs to be explained in more detail. It takes 3 arguments, checks whether the 1st argument is equal to 0.0, and returns the 2nd argument if it is not zero, or the 3rd one when it is. Note that unlike comparison operators, IF() does not use a threshold! Therefore, it's safe to use comparison results as its 1st argument, but arithmetic operators might produce unexpected results. For instance, the following two calls will produce different results even though they are logically equivalent:

IF ( sqrt(3)*sqrt(3)-3<>0, a, b )
IF ( sqrt(3)*sqrt(3)-3, a, b )

In the first case, the comparison operator <> will return 0.0 (false) because of a threshold, and IF() will always return 'b' as a result. In the second one, the same sqrt(3)*sqrt(3)-3 expression will be compared with zero without threshold by the IF() function itself. But its value will be slightly different from zero because of limited floating point calculations precision. Because of that, the comparison with 0.0 done by IF() will not pass, and the second variant will return 'a' as a result.

BIGINT() function, introduced in version 0.9.9, forcibly promotes the integer argument to 64-bit type, and does nothing on floating point argument. It's intended to help enforce evaluation of certain expressions (such as "a*b") in 64-bit mode even though all the arguments are 32-bit.

The IDIV() function performs an integer division on its 2 arguments. The result is an integer as well, unlike the "a/b" result.

IN(expr,val1,val2,...), introduced in version 0.9.9, takes 2 or more arguments, and returns 1 if the 1st argument (expr) is equal to any of the other arguments (val1..valN), or 0 otherwise. Currently, all the checked values (but not the expression itself!) are required to be constant. (It's technically possible to implement arbitrary expressions too, and that might be implemented in the future.) Constants are pre-sorted and then binary search is used, so IN() even against a big arbitrary list of constants will be very quick.

INTERVAL(expr,point1,point2,point3,...), introduced in version 0.9.9, takes 2 or more arguments, and returns the index of the argument that is less than the first argument: it returns 0 if expr<point1, 1 if point1<=expr<point2, and so on. It is required that point1<point2<...<pointN for this function to work correctly.

NOW(), introduced in version 0.9.9, is a helper function that returns current timestamp as a 32-bit integer.
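
Putting a few of these functions together, a sort expression might look like the following sketch (category_id, price and published are hypothetical attributes; published is a UNIX timestamp):

$cl->SetSortMode ( SPH_SORT_EXPR,
	"IN(category_id,1,5,7)*10 + INTERVAL(price,100,500,1000) + IF(published>NOW()-86400,100,0)" );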

4.6. Grouping (clustering) search results

Sometimes it could be useful to group (or, in other terms, cluster) search results and/or count per-group match counts - for instance, to draw a nice graph of how many matching blog posts there were per each month; or to group Web search results by site; or to group matching forum posts by author; etc.

In theory, this could be performed by doing only the full-text search in Sphinx and then using found IDs to group on SQL server side. However, in practice doing this with a big result set (10K-10M matches) would typically kill performance.

To avoid that, Sphinx offers so-called grouping mode. It is enabled with SetGroupBy() API call. When grouping, all matches are assigned to different groups based on group-by value. This value is computed from specified attribute using one of the following built-in functions:

  • SPH_GROUPBY_DAY, extracts year, month and day in YYYYMMDD format from timestamp;
  • SPH_GROUPBY_WEEK, extracts year and first day of the week number (counting from year start) in YYYYNNN format from timestamp;
  • SPH_GROUPBY_MONTH, extracts month in YYYYMM format from timestamp;
  • SPH_GROUPBY_YEAR, extracts year in YYYY format from timestamp;
  • SPH_GROUPBY_ATTR, uses attribute value itself for grouping.

The final search result set then contains one best match per group. Grouping function value and per-group match count are returned along as "virtual" attributes named @group and @count respectively.

The result set is sorted by group-by sorting clause, with the syntax similar to SPH_SORT_EXTENDED sorting clause syntax. In addition to @id and @weight, group-by sorting clause may also include:

  • @group (groupby function value),
  • @count (amount of matches in group).

The default mode is to sort by groupby value in descending order, ie. by "@group desc".

On completion, the total_found result parameter would contain the total amount of matching groups over the whole index.

WARNING: grouping is done in fixed memory and thus its results are only approximate; so there might be more groups reported in total_found than actually present. @count might also be underestimated. To reduce inaccuracy, one should raise max_matches. If max_matches allows storing all found groups, results will be 100% correct.

For example, if sorting by relevance and grouping by "published" attribute with SPH_GROUPBY_DAY function, then the result set will contain

  • one most relevant match per each day when there were any matches published,
  • with day number and per-day match count attached,
  • sorted by day number in descending order (ie. recent days first).
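
Through the API, the example above might be set up roughly as follows (a sketch; see Section 6.5.1 for SetGroupBy()):

$cl->SetSortMode ( SPH_SORT_RELEVANCE );
$cl->SetGroupBy ( "published", SPH_GROUPBY_DAY, "@group desc" );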

4.7. Distributed searching

To scale well, Sphinx has distributed searching capabilities. Distributed searching is useful to improve query latency (ie. search time) and throughput (ie. max queries/sec) in multi-server, multi-CPU or multi-core environments. This is essential for applications which need to search through huge amounts of data (ie. billions of records and terabytes of text).

The key idea is to horizontally partition (HP) the searched data across search nodes and then process it in parallel.

Partitioning is done manually. You should

  • setup several instances of Sphinx programs (indexer and searchd) on different servers;
  • make the instances index (and search) different parts of data;
  • configure a special distributed index on some of the searchd instances;
  • and query this index.

This index only contains references to other local and remote indexes - so it could not be directly reindexed, and you should reindex those indexes which it references instead.
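
A minimal sketch of such a distributed index in sphinx.conf might look like the following (host names and index names are hypothetical; 3312 is the default searchd port):

index dist1
{
	type	= distributed
	local	= chunk1
	agent	= box2:3312:chunk2
	agent	= box3:3312:chunk3
}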

When searchd receives a query against distributed index, it does the following:

  1. connects to configured remote agents;
  2. issues the query;
  3. sequentially searches configured local indexes (while the remote agents are searching);
  4. retrieves remote agents' search results;
  5. merges all the results together, removing the duplicates;
  6. sends the merged results to the client.

From the application's point of view, there are no differences between usual and distributed index at all.

Any searchd instance could serve both as a master (which aggregates the results) and a slave (which only does local searching) at the same time. This has a number of uses:

  1. every machine in a cluster could serve as a master which searches the whole cluster, and search requests could be balanced between masters to achieve a kind of HA (high availability) in case any of the nodes fails;
  2. if running within a single multi-CPU or multi-core machine, there would be only 1 searchd instance querying itself as an agent and thus utilizing all CPUs/cores.

It is scheduled to implement better HA support which would allow specifying which agents mirror each other, doing health checks, keeping track of alive agents, load-balancing requests, etc.

4.8. searchd query log format

searchd logs all successfully executed search queries into a query log file. Here's an example:

[Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj] test
[Fri Jun 29 21:20:34 2007] 0.024 sec [all/0/rel 19886 (0,20) @channel_id] [lj] test

This log format is as follows:

[query-date] query-time [match-mode/filters-count/sort-mode
    total-matches (offset,limit) @groupby-attr] [index-name] query

Match mode can take one of the following values:

  • "all" for SPH_MATCH_ALL mode;
  • "any" for SPH_MATCH_ANY mode;
  • "phr" for SPH_MATCH_PHRASE mode;
  • "bool" for SPH_MATCH_BOOLEAN mode;
  • "ext" for SPH_MATCH_EXTENDED mode;
  • "ext2" for SPH_MATCH_EXTENDED2 mode;
  • "scan" if the full scan mode was used, either by being specified with SPH_MATCH_FULLSCAN, or if the query was empty (as documented under Matching Modes)

Sort mode can take one of the following values:

  • "rel" for SPH_SORT_RELEVANCE mode;
  • "attr-" for SPH_SORT_ATTR_DESC mode;
  • "attr+" for SPH_SORT_ATTR_ASC mode;
  • "tsegs" for SPH_SORT_TIME_SEGMENTS mode;
  • "ext" for SPH_SORT_EXTENDED mode.

Additionally, if searchd was started with --iostats, there will be an additional block of data after the block where the searched index(es) are listed.

A query log entry might take the form of:

[Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj] [ios=6 kb=111.1 ms=0.5] test

This additional block reports the I/O operations performed in the course of the search: the number of file I/O operations carried out, the amount of data read from the index files in kilobytes, and the time spent on I/O operations (although there is a background processing component, the bulk of this time is I/O operation time).

5. Command line tools reference

As mentioned elsewhere, Sphinx is not a single program called 'sphinx', but a collection of 4 separate programs which collectively form Sphinx. This section covers these tools and how to use them.

5.1. indexer command reference

indexer is the first of the two principal tools that make up Sphinx. Invoked either from the command line directly, or as part of a larger script, indexer is solely responsible for gathering the data that will be searchable.

The calling syntax for indexer is as follows:

indexer [OPTIONS] [indexname1 [indexname2 [...]]]

Essentially you would list the different possible indexes (that you would later make available to search) in sphinx.conf, so when calling indexer, at a minimum you need to tell it which index (or indexes) you want to index.

If sphinx.conf contained details on 2 indexes, mybigindex and mysmallindex, you could do the following:

$ indexer mybigindex
$ indexer mysmallindex mybigindex

As part of the configuration file, sphinx.conf, you specify one or more indexes for your data. You might call indexer to reindex one of them, ad-hoc, or you can tell it to process all indexes - you are not limited to calling just one, or all at once, you can always pick some combination of the available indexes.

The majority of the options for indexer are given in the configuration file, however there are some options you might need to specify on the command line as well, as they can affect how the indexing operation is performed. These options are:

  • --config <file> (-c <file> for short) tells indexer to use the given file as its configuration. Normally, it will look for sphinx.conf in the installation directory (e.g. /usr/local/sphinx/etc/sphinx.conf if installed into /usr/local/sphinx), followed by the current directory you are in when calling indexer from the shell. This is most useful in shared environments where the binary files are installed somewhere like /usr/local/sphinx/ but you want to provide users with the ability to make their own custom Sphinx set-ups, or if you want to run multiple instances on a single server. In cases like those you could allow them to create their own sphinx.conf files and pass them to indexer with this option. For example:
    $ indexer --config /home/myuser/sphinx.conf myindex
    
  • --all tells indexer to update every index listed in sphinx.conf, instead of listing individual indexes. This would be useful in small configurations, or cron-type or maintenance jobs where the entire index set will get rebuilt each day, or week, or whatever period is best. Example usage:
    $ indexer --config /home/myuser/sphinx.conf --all
    
  • --rotate is used for rotating indexes. Unless you have the situation where you can take the search function offline without troubling users, you will almost certainly need to keep search running whilst indexing new documents. --rotate creates a second index, parallel to the first (in the same place, simply including .new in the filenames). Once complete, indexer notifies searchd via sending the SIGHUP signal, and searchd will attempt to rename the indexes (renaming the existing ones to include .old and renaming the .new to replace them), and then start serving from the newer files. Depending on the setting of seamless_rotate, there may be a slight delay in being able to search the newer indexes. Example usage:
    $ indexer --rotate --all
    
  • --quiet tells indexer not to output anything, unless there is an error. Again, most used for cron-type, or other script jobs where the output is irrelevant or unnecessary, except in the event of some kind of error. Example usage:
    $ indexer --rotate --all --quiet
    
  • --noprogress does not display progress details as they occur; instead, the final status details (such as documents indexed, speed of indexing and so on) are only reported at completion of indexing. In instances where the script is not being run on a console (or 'tty'), this will be on by default. Example usage:
    $ indexer --rotate --all --noprogress
    
  • --buildstops <outputfile.txt> <N> reviews the index source, as if it were indexing the data, and produces a list of the terms that are being indexed. In other words, it produces a list of all the searchable terms that are becoming part of the index. Note: it does not update the index in question, it simply processes the data 'as if' it were indexing, including running queries defined with sql_query_pre or sql_query_post. outputfile.txt will contain the list of words, one per line, sorted by frequency with most frequent first, and N specifies the maximum number of words that will be listed; if N is large enough to encompass every word in the index, all of them will be returned. Such a dictionary list could be used for client application features around "Did you mean..." functionality, usually in conjunction with --buildfreqs, below. Example:
    $ indexer myindex --buildstops word_freq.txt 1000
    
    This would produce a document in the current directory, word_freq.txt with the 1,000 most common words in 'myindex', ordered by most common first. Note that the file will pertain to the last index indexed when specified with multiple indexes or --all (i.e. the last one listed in the configuration file)
  • --buildfreqs works with --buildstops (and is ignored if --buildstops is not specified). As --buildstops provides the list of words used within the index, --buildfreqs adds the quantity present in the index, which would be useful in establishing whether certain words should be considered stopwords if they are too prevalent. It will also help with developing "Did you mean..." features where you can see how much more common a given word is compared to another, similar one. Example:
    $ indexer myindex --buildstops word_freq.txt 1000 --buildfreqs
    
    This would produce the word_freq.txt as above, however after each word would be the number of times it occurred in the index in question.
  • --merge <dst-index> <src-index> is used for physically merging indexes together, for example if you have a main+delta scheme, where the main index rarely changes, but the delta index is rebuilt frequently, and --merge would be used to combine the two. The operation moves from right to left - the contents of src-index get examined and physically combined with the contents of dst-index, and the result is left in dst-index. In pseudo-code, it might be expressed as: dst-index += src-index. An example:
    $ indexer --merge main delta --rotate
    
    In the above example, where the main is the master, rarely modified index, and delta is the less frequently modified one, you might use the above to call indexer to combine the contents of the delta into the main index and rotate the indexes.
  • --merge-dst-range <attr> <min> <max> applies the given range filter upon merging. Specifically, as the merge is applied to the destination index (as part of --merge, and is ignored if --merge is not specified), indexer will also filter the documents ending up in the destination index, and only documents that pass the given filter will end up in the final index. This could be used, for example, in an index where there is a 'deleted' attribute, where 0 means 'not deleted'. Such an index could be merged with:
    $ indexer --merge main delta --merge-dst-range deleted 0 0
    
    Any documents marked as deleted (value 1) would be removed from the newly-merged destination index. It can be added several times to the command line, to add successive filters to the merge, all of which must be met in order for a document to become part of the final index.

5.2. searchd command reference

searchd is the second of the two principal tools that make up Sphinx. searchd is the part of the system which actually handles searches; it functions as a server and is responsible for receiving queries, processing them and returning a dataset back to the different APIs for client applications.

Unlike indexer, searchd is not designed to be run from a regular script or by ad-hoc command-line calling, but instead as a daemon to be called from init.d (on Unix/Linux type systems) or as a service (on Windows-type systems), so not all of the command line options will always apply; some are build-dependent.

Calling searchd is simply a case of:

$ searchd [OPTIONS]

The options available to searchd on all builds are:

  • --help (-h for short) lists all of the parameters that can be called in your particular build of searchd.
  • --config <file> (-c <file> for short) tells searchd to use the given file as its configuration, just as with indexer above.
  • --stop is used to stop searchd, using the details of the PID file as specified in the sphinx.conf file, so you may also need to confirm to searchd which configuration file to use with the --config option. NB, calling --stop will also make sure any changes applied to the indexes with UpdateAttributes() will be applied to the index files themselves. Example:
    $ searchd --config /home/myuser/sphinx.conf --stop
    
  • --pidfile is used to explicitly state a PID file, where the process information is stored regarding searchd, used for inter-process communications (for example, indexer will need to know the PID to contact searchd for rotating indexes). Normally, searchd would use a PID file if running in regular mode (i.e. not with --console), but it is possible that you will be running it in console mode whilst the index is being updated and rotated, for which a PID file will be needed.
    $ searchd --config /home/myuser/sphinx.conf --pidfile /home/myuser/sphinx.pid
    
  • --console is used to force searchd into console mode; typically it will be running as a conventional server application, and will aim to dump information into the log files (as specified in sphinx.conf). Sometimes though, when debugging issues in the configuration or the daemon itself, or trying to diagnose hard-to-track-down problems, it may be easier to force it to dump information directly to the console/command line from which it is being called. Running in console mode also means that the process will not be forked (so searches are done in sequence) and logs will not be written to. (It should be noted that console mode is not the intended method for running searchd) You can invoke it as such:
    $ searchd --config /home/myuser/sphinx.conf --console
    
  • --iostats is used in conjunction with the logging options (the query_log will need to have been activated in sphinx.conf) to provide more detailed information on a per-query basis as to the input/output operations carried out in the course of that query, with a slight performance hit and of course bigger logs. Further details are available under the query log format section. You might start searchd thus:
    $ searchd --config /home/myuser/sphinx.conf --iostats
    
  • --port portnumber (-p for short) is used to specify the port that searchd should listen on, usually for debugging purposes. This will usually default to 3312, but sometimes you need to run it on a different port. Specifying it on the command line will override anything specified in the configuration file. The valid range is 0 to 65535, but ports numbered 1024 and below usually require a privileged account in order to run. An example of usage:
    $ searchd --port 3313
    
  • --index <index> forces this instance of searchd only to serve the specified index. Like --port, above, this is usually for debugging purposes; more long-term changes would generally be applied to the configuration file itself. Example usage:
    $ searchd --index myindex
    

There are some options for searchd that are specific to Windows platforms, concerning handling as a service; these are only available on Windows binaries.

Note that on Windows searchd will default to --console mode, unless you install it as a service.

  • --install installs searchd as a service into the Microsoft Management Console (Control Panel / Administrative Tools / Services). Any other parameters specified on the command line, where --install is specified will also become part of the command line on future starts of the service. For example, as part of calling searchd, you will likely also need to specify the configuration file with --config, and you would do that as well as specifying --install. Once called, the usual start/stop facilities will become available via the management console, so any methods you could use for starting, stopping and restarting services would also apply to searchd. Example:
    C:\> C:\Sphinx\bin\searchd.exe --install --config C:\Sphinx\sphinx.conf
    
    If you wanted to have the I/O stats every time you started searchd, you would specify its option on the same line as the --install command thus:
    C:\> C:\Sphinx\bin\searchd.exe --install --config C:\Sphinx\sphinx.conf --iostats
    
  • --delete removes the service from the Microsoft Management Console and other places where services are registered, after previously installed with --install. Note, this does not uninstall the software or delete the indexes. It means the service will not be called from the services systems, and will not be started on the machine's next start. If currently running as a service, the current instance will not be terminated (until the next reboot, or searchd is called with --stop). If the service was installed with a custom name (with --servicename), the same name will need to be specified with --servicename when calling to uninstall. Example:
    C:\> C:\Sphinx\bin\searchd.exe --delete
    
  • --servicename <name> applies the given name to searchd when installing or deleting the service, as would appear in the Management Console; this will default to searchd, but if being deployed on servers where multiple administrators may log into the system, or a system with multiple searchd instances, a more descriptive name may be applicable. Note that unless combined with --install or --delete, this option does not do anything. Example:
    C:\> C:\Sphinx\bin\searchd.exe --install --config C:\Sphinx\sphinx.conf --servicename SphinxSearch
    
  • --ntservice is the option that is passed by the Management Console to searchd to invoke it as a service on Windows platforms. It would not normally be necessary to call this directly; this would normally be called by Windows when the service would be started, although if you wanted to call this as a regular service from the command-line (as the complement to --console) you could do so in theory.

5.3. search command reference

search is one of the two less prominent tools within the Sphinx package. Whereas searchd is responsible for searches in a server-type environment, search is aimed at testing the index from the command line, and testing the index quickly without building a framework to make the connection to the server and process its response.

Note: search is not intended to be deployed as part of a client application; it is strongly recommended you do not write an interface to search instead of searchd, and none of the bundled client APIs support this method. (In any event, search will reload files each time, whereas searchd will cache them in memory for performance.)

That said, many types of query that you could build in the APIs could also be made with search, however for very complex searches it may be easier to construct them using a small script and the corresponding API. Additionally, some newer features may be available in the searchd system that have not yet been brought into search.

The calling syntax for search is as follows:

search [OPTIONS] word1 [word2 [word3 [...]]]

When calling search, it is not necessary to have searchd running; simply that the account running search has read access to the configuration file and the location and files of the indexes.

The default behaviour is to apply a search for word1 (AND word2 AND word3... as specified) to all fields in all indexes as given in the configuration file. If constructing the equivalent in the API, this would be the equivalent to passing SPH_MATCH_ALL to SetMatchMode, and specifying * as the indexes to query as part of Query.
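
For comparison, the rough API equivalent of the default search invocation would look like the following sketch (using the PHP API described in Section 6):

$cl->SetMatchMode ( SPH_MATCH_ALL );
$res = $cl->Query ( "word1 word2 word3", "*" );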

There are many options available to search. Firstly, the general options:

  • --config <file> (-c <file> for short) tells search to use the given file as its configuration, just as with indexer above.
  • --index <index> (-i <index> for short) tells search to limit searching to the specified index only; normally it would attempt to search all of the physical indexes listed in sphinx.conf, not any distributed ones.
  • --stdin tells search to accept the query from the standard input, rather than the command line. This can be useful for testing purposes whereby you could feed input via pipes and from scripts.

Options for setting matches:

  • --any (-a for short) changes the matching mode to match any of the words as part of the query (word1 OR word2 OR word3). In the API this would be equivalent to passing SPH_MATCH_ANY to SetMatchMode.
  • --phrase (-p for short) changes the matching mode to match all of the words as part of the query, and do so in the phrase given (not including punctuation). In the API this would be equivalent to passing SPH_MATCH_PHRASE to SetMatchMode.
  • --boolean (-b for short) changes the matching mode to Boolean matching. Note if using Boolean syntax matching on the command line, you may need to escape the symbols (with a backslash) to avoid the shell/command line processor applying them, such as ampersands being escaped on a Unix/Linux system to avoid it forking to the search process, although this can be resolved by using --stdin, as below. In the API this would be equivalent to passing SPH_MATCH_BOOLEAN to SetMatchMode.
  • --ext (-e for short) changes the matching mode to Extended matching. In the API this would be equivalent to passing SPH_MATCH_EXTENDED to SetMatchMode, and it should be noted that use of this mode is being discouraged in favour of Extended2, below.
  • --ext2 (-e2 for short) changes the matching mode to Extended matching, version 2. In the API this would be equivalent to passing SPH_MATCH_EXTENDED2 to SetMatchMode, and it should be noted that use of this mode is being recommended in favour of Extended, due to being more efficient and providing other features.
  • --filter <attr> <v> (-f <attr> <v> for short) filters the results such that only documents where the attribute given (attr) matches the value given (v) are returned. For example, --filter deleted 0 only matches documents with an attribute called 'deleted' where its value is 0. You can also add multiple filters on the command line by specifying --filter multiple times, however if you apply a second filter to an attribute it will override the first defined filter.

Options for handling the results:

  • --limit <count> (-l count for short) limits the total number of matches back to the number given. If a 'group' is specified, this will be the number of grouped results. This defaults to 20 results if not specified (as do the APIs)
  • --offset <count> (-o <count> for short) offsets the result list by the number of places set by the count; this would be used for pagination through results, where if you have 20 results per 'page', the second page would begin at offset 20, the third page at offset 40, etc.
  • --group <attr> (-g <attr> for short) specifies that results should be grouped together based on the attribute specified. Like the GROUP BY clause in SQL, it will combine all results where the attribute given matches, and returns a set of results where each returned result is the best from each group. Unless otherwise specified, this will be the best match on relevance.
  • --groupsort <expr> (-gs <expr> for short) instructs that when results are grouped with --group, the expression given in <expr> shall determine the order of the groups. Note, this does not specify which is the best item within the group, only the order in which the groups themselves shall be returned.
  • --sortby <clause> (-s <clause> for short) specifies that results should be sorted in the order listed in <clause>. This allows you to specify the order you wish results to be presented in, ordering by different columns. For example, you could say --sortby "@weight DESC entrytime DESC" to sort entries first by weight (or relevance) and where two or more entries have the same weight, to then sort by the time with the highest time (newest) first. You will usually need to put the items in quotes (--sortby "@weight DESC") or use commas (--sortby @weight,DESC) to avoid the items being treated separately. Additionally, like the regular sorting modes, if --group (grouping) is being used, this will state how to establish the best match within each group.
  • --sortexpr expr (-S expr for short) specifies that the search results should be presented in an order determined by an arithmetic expression, stated in expr. For example: --sortexpr "@weight + ( user_karma + ln(pageviews) )*0.1" (again noting that this will have to be quoted to avoid the shell dealing with the asterisk). Extended sort mode is discussed in more detail under the SPH_SORT_EXTENDED entry under the Sorting modes chapter of the manual.
  • --sort=date specifies that the results should be sorted by descending (i.e. most recent first) date. This requires that there is an attribute in the index that is set as a timestamp.
  • --rsort=date specifies that the results should be sorted by ascending (i.e. oldest first) date. This requires that there is an attribute in the index that is set as a timestamp.
  • --sort=ts specifies that the results should be sorted by timestamp in groups; it will return all of the documents whose timestamp is within the last hour, then sorted within that bracket for relevance. After, it would return the documents from the last day, sorted by relevance, then the last week and then the last month. It is discussed in more detail under the SPH_SORT_TIME_SEGMENTS entry under the Sorting modes chapter of the manual.

Other options:

  • --noinfo (-q for short) instructs search not to look up data in your SQL database. Specifically, for debugging with MySQL and search, you can provide it with a query to look up the full article based on the returned document ID. It is explained in more detail under the sql_query_info directive.
  • --dumpheader path instructs search to report the entire attribute and field list for a given index; you would need to specify the full path to the .sph file that is part of the index you wish to examine. It will provide a breakdown of how the index is constructed, which can be useful for debugging.

5.4. spelldump command reference

spelldump is the second of the two less prominent tools within the Sphinx package.

It is used to extract the contents of a dictionary file that uses ispell format, which can help build word lists for wordforms - all of the possible forms are pre-built for you.

Its general usage is:

spelldump [options] <dictionary> <affix> [result] [locale-name]

The two main parameters are the dictionary's main file and its affix file; usually these are named as [language-prefix].dict and [language-prefix].aff and will be available with most common Linux distributions, as well as various places online.

[result] specifies where the dictionary data should be output to, and [locale-name] additionally specifies the locale details you wish to use.

There is an additional option, -c [file], which specifies a file for case conversion details.

Examples of its usage are:

spelldump en.dict en.aff
spelldump ru.dict ru.aff ru.txt ru_RU.CP1251
spelldump ru.dict ru.aff ru.txt .1251

The results file will contain a list of all the words in the dictionary in alphabetic order, output in the format of a wordforms file, which you can use to customise for your specific circumstances. An example of the result file:

zone > zone
zoned > zoned
zoning > zoning

6. API reference

There are a number of native searchd client API implementations for Sphinx. As of the time of this writing, we officially support our own PHP, Python, and Java implementations. There are also third party free, open-source API implementations for Perl, Ruby, and C++.

The reference API implementation is in PHP, because (we believe) Sphinx is more widely used with PHP than with any other language. This reference documentation is in turn based on the reference PHP API, and all code samples in this section will be given in PHP.

However, all other APIs provide the same methods and implement the very same network protocol. Therefore the documentation does apply to them as well. There might be minor differences as to the method naming conventions or specific data structures used. But the provided functionality must not differ across languages.

6.1. General API functions

6.1.1. GetLastError

Prototype: function GetLastError()

Returns last error message, as a string, in human readable format. If there were no errors during the previous API call, empty string is returned.

You should call it when any other function (such as Query()) fails (typically, the failing function returns false). The returned string will contain the error description.

The error message is not reset by this call; so you can safely call it several times if needed.
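
A typical usage pattern might look like the following sketch ('myindex' is a hypothetical index name):

$res = $cl->Query ( "hello world", "myindex" );
if ( $res === false )
	print "Query failed: " . $cl->GetLastError() . "\n";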

6.1.2. GetLastWarning

Prototype: function GetLastWarning ()

Returns last warning message, as a string, in human readable format. If there were no warnings during the previous API call, empty string is returned.

You should call it to verify whether your request (such as Query()) was completed but with warnings. For instance, a search query against a distributed index might complete successfully even if several remote agents timed out. In that case, a warning message would be produced.

The warning message is not reset by this call; so you can safely call it several times if needed.

6.1.3. SetServer

Prototype: function SetServer ( $host, $port )

Sets searchd host name and TCP port. All subsequent requests will use the new host and port settings. Default host and port are 'localhost' and 3312, respectively.

6.1.4. SetRetries

Prototype: function SetRetries ( $count, $delay=0 )

Sets distributed retry count and delay.

On temporary failures searchd will attempt up to $count retries per agent. $delay is the delay between the retries, in milliseconds. Retries are disabled by default. Note that this call will not make the API itself retry on temporary failure; it only tells searchd to do so. Currently, the list of temporary failures includes all kinds of connect() failures and maxed out (too busy) remote agents.

6.1.5. SetConnectTimeout

Prototype: function SetConnectTimeout ( $timeout )

Sets the time allowed to spend connecting to the server before giving up.

Under some circumstances, the server can be delayed in responding, either due to network delays, or a query backlog. In either instance, this allows the client application programmer some degree of control over how their program interacts with searchd when the server is not available, and can ensure that the client application does not fail due to exceeding the script execution limits (especially in PHP).

In the event of a failure to connect, an appropriate error code should be returned back to the application in order for application-level error handling to advise the user.

6.1.6. SetArrayResult

Prototype: function SetArrayResult ( $arrayresult )

PHP specific. Controls matches format in the search results set (whether matches should be returned as an array or a hash).

$arrayresult argument must be boolean. If $arrayresult is false (the default mode), matches will be returned in PHP hash format with document IDs as keys, and other information (weight, attributes) as values. If $arrayresult is true, matches will be returned as a plain array with complete per-match information including document ID.

Introduced along with GROUP BY support on MVA attributes. Group-by-MVA result sets may contain duplicate document IDs. Thus they need to be returned as plain arrays, because hashes will only keep one entry per document ID.

6.1.7. IsConnectError

Prototype: function IsConnectError ()

Checks whether the last error was a network error on API side, or a remote error reported by searchd. Returns true if the last connection attempt to searchd failed on API side, false otherwise (if the error was remote, or there were no connection attempts at all). Introduced in version 0.9.9.

6.2. General query settings

6.2.1. SetLimits

Prototype: function SetLimits ( $offset, $limit, $max_matches=0, $cutoff=0 )

Sets offset into server-side result set ($offset) and amount of matches to return to client starting from that offset ($limit). Can additionally control maximum server-side result set size for current query ($max_matches) and the threshold amount of matches to stop searching at ($cutoff). All parameters must be non-negative integers.

The first two parameters to SetLimits() are identical in behavior to the MySQL LIMIT clause. They instruct searchd to return at most $limit matches starting from match number $offset. The default offset and limit settings are 0 and 20, that is, to return the first 20 matches.

The max_matches setting controls how many matches searchd will keep in RAM while searching. All matching documents will be processed, ranked, filtered, and sorted as usual even if max_matches is set to 1. But only the best N documents are stored in memory at any given moment for performance and RAM usage reasons, and this setting controls that N. Note that there are two places where the max_matches limit is enforced. The per-query limit is controlled by this API call, but there is also a per-server limit controlled by the max_matches setting in the config file. To prevent RAM usage abuse, the server will not allow setting the per-query limit higher than the per-server limit.

You can't retrieve more than max_matches matches to the client application. The default limit is set to 1000. Normally, you should not need to go over this limit. One thousand records is enough to present to the end user. And if you're thinking about pulling the results to the application for further sorting or filtering, that would be much more efficient if performed on the Sphinx side.

The $cutoff setting is intended for advanced performance control. It tells searchd to forcibly stop the search query once $cutoff matches have been found and processed.
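
For example, the following sketch requests the third page of results at 20 per page, and then a stricter variant that also caps the server-side result set and stops early:

$cl->SetLimits ( 40, 20 );              // page 3: skip 40 matches, return the next 20
$cl->SetLimits ( 0, 20, 1000, 10000 );  // also keep at most 1000 best matches, stop after 10000 found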

6.2.2. SetMaxQueryTime

Prototype: function SetMaxQueryTime ( $max_query_time )

Sets maximum search query time, in milliseconds. Parameter must be a non-negative integer. Default value is 0 which means "do not limit".

Similar to $cutoff setting from SetLimits(), but limits elapsed query time instead of processed matches count. Local search queries will be stopped once that much time has elapsed. Note that if you're performing a search which queries several local indexes, this limit applies to each index separately.

6.2.3. SetOverride

Prototype: function SetOverride ( $attrname, $attrtype, $values )

Sets temporary (per-query) per-document attribute value overrides. Only supports scalar attributes. $values must be a hash that maps document IDs to overridden attribute values. Introduced in version 0.9.9.

Override feature lets you temporarily update attribute values for some documents within a single query, leaving all other queries unaffected. This might be useful for personalized data. For example, assume you're implementing a personalized search function that wants to boost the posts that the user's friends recommend. Such data is not just dynamic, but also personal; so you can't simply put it in the index because you don't want everyone's searches affected. Overrides, on the other hand, are local to a single query and invisible to everyone else. So you can, say, set up a "friends_weight" value for every document, defaulting to 0, then temporarily override it with 1 for documents 123, 456 and 789 (recommended by exactly the friends of the current user), and use that value when ranking.
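
The friends_weight example above might look roughly like the following sketch (SPH_ATTR_INTEGER is the attribute type constant; the boost factor is arbitrary):

$cl->SetOverride ( "friends_weight", SPH_ATTR_INTEGER, array ( 123=>1, 456=>1, 789=>1 ) );
$cl->SetSortMode ( SPH_SORT_EXPR, "@weight + friends_weight*1000" );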

6.2.4. SetSelect

Prototype: function SetSelect ( $clause )

Sets the select clause, listing specific attributes to fetch, and expressions to compute and fetch. Clause syntax mimics SQL. Introduced in version 0.9.9.

SetSelect() is very similar to the part of a typical SQL query between SELECT and FROM. It lets you choose what attributes (columns) to fetch, and also what expressions over the columns to compute and fetch. A certain difference from SQL is that expressions must always be aliased to a correct identifier (consisting of letters and digits) using the 'AS' keyword. SQL also lets you do that but does not require it. Sphinx enforces aliases so that the computation results can always be returned under a "normal" name in the result set, used in other clauses, etc.

Everything else is basically identical to SQL. Star ('*') is supported. Functions are supported. An arbitrary number of expressions is supported. Computed expressions can be used for sorting, filtering, and grouping, just as the regular attributes.

Expression sorting (Section 4.5, “SPH_SORT_EXPR mode”) and geodistance functions (Section 6.4.5, “SetGeoAnchor”) are now internally implemented using this computed expressions mechanism, using magic names '@expr' and '@geodist' respectively.

Example:
$cl->SetSelect ( "*, @weight+(user_karma+ln(pageviews))*0.1 AS myweight" );
$cl->SetSelect ( "exp_years, salary_gbp*{$gbp_usd_rate} AS salary_usd, IF(age>40,1,0) AS over40" );

6.3. Full-text search query settings

6.3.1. SetMatchMode

Prototype: function SetMatchMode ( $mode )

Sets full-text query matching mode, as described in Section 4.1, “Matching modes”. Parameter must be a constant specifying one of the known modes.

WARNING: (PHP specific) you must not put the matching mode constant name in quotes; that syntax specifies a string and is incorrect:

$cl->SetMatchMode ( "SPH_MATCH_ANY" ); // INCORRECT! will not work as expected
$cl->SetMatchMode ( SPH_MATCH_ANY ); // correct, works OK

6.3.2. SetRankingMode

Prototype: function SetRankingMode ( $ranker )

Sets ranking mode. Only available in SPH_MATCH_EXTENDED2 matching mode at the time of this writing. Parameter must be a constant specifying one of the known modes.

By default, Sphinx computes two factors which contribute to the final match weight. The major part is query phrase proximity to document text. The minor part is so-called BM25 statistical function, which varies from 0 to 1 depending on the keyword frequency within document (more occurrences yield higher weight) and within the whole index (more rare keywords yield higher weight).

However, in some cases you'd want to compute weight differently - or maybe avoid computing it at all for performance reasons because you're sorting the result set by something else anyway. This can be accomplished by setting the appropriate ranking mode.

Currently implemented modes are:

  • SPH_RANK_PROXIMITY_BM25, default ranking mode which uses and combines both phrase proximity and BM25 ranking.
  • SPH_RANK_BM25, statistical ranking mode which uses BM25 ranking only (similar to most other full-text engines). This mode is faster but may result in worse quality on queries which contain more than 1 keyword.
  • SPH_RANK_NONE, disabled ranking mode. This mode is the fastest. It is essentially equivalent to boolean searching. A weight of 1 is assigned to all matches.
  • SPH_RANK_WORDCOUNT, ranking by keyword occurrences count. This ranker computes the amount of per-field keyword occurrences, then multiplies the amounts by field weights, then sums the resulting values for the final result.
  • SPH_RANK_PROXIMITY, added in version 0.9.9, returns raw phrase proximity value as a result. This mode is internally used to emulate SPH_MATCH_ALL queries.
  • SPH_RANK_MATCHANY, added in version 0.9.9, returns rank as it was computed in SPH_MATCH_ANY mode earlier, and is internally used to emulate SPH_MATCH_ANY queries.

6.3.3. SetSortMode

Prototype: function SetSortMode ( $mode, $sortby="" )

Sets matches sorting mode, as described in Section 4.5, “Sorting modes”. Parameter must be a constant specifying one of the known modes.

WARNING: (PHP specific) you must not put the sort mode constant name in quotes; that syntax specifies a string and is incorrect:

$cl->SetSortMode ( "SPH_SORT_ATTR_DESC" ); // INCORRECT! will not work as expected
$cl->SetSortMode ( SPH_SORT_ATTR_ASC ); // correct, works OK

6.3.4. SetWeights

Prototype: function SetWeights ( $weights )

Binds per-field weights in the order of appearance in the index. DEPRECATED, use SetFieldWeights() instead.

6.3.5. SetFieldWeights

Prototype: function SetFieldWeights ( $weights )

Binds per-field weights by name. Parameter must be a hash (associative array) mapping string field names to integer weights.

Match ranking can be affected by per-field weights. For instance, see Section 4.4, “Weighting” for an explanation how phrase proximity ranking is affected. This call lets you specify what non-default weights to assign to different full-text fields.

The weights must be positive 32-bit integers. The final weight will be a 32-bit integer too. Default weight value is 1. Unknown field names will be silently ignored.

There is no enforced limit on the maximum weight value at the moment. However, beware that if you set it too high you can start hitting 32-bit wraparound issues. For instance, if you set a weight of 10,000,000 and search in extended mode, then the maximum possible weight will be equal to 10 million (your weight) multiplied by 1 thousand (the internal BM25 scaling factor, see Section 4.4, “Weighting”) multiplied by 1 or more (phrase proximity rank). The result is at least 10 billion, which does not fit in 32 bits and will be wrapped around, producing unexpected results.
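
A typical call might look like the following sketch (field names are hypothetical; matches in 'title' count ten times more than matches in 'body'):

$cl->SetFieldWeights ( array ( "title"=>10, "body"=>1 ) );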

6.3.6. SetIndexWeights

Prototype: function SetIndexWeights ( $weights )

Sets per-index weights, and enables weighted summing of match weights across different indexes. Parameter must be a hash (associative array) mapping string index names to integer weights. Default is an empty array, which means weighted summing is disabled.

When a match with the same document ID is found in several different local indexes, by default Sphinx simply chooses the match from the index specified last in the query. This is to support searching through partially overlapping index partitions.

However, in some cases the indexes are not just partitions, and you might want to sum the weights across the indexes instead of picking one. SetIndexWeights() lets you do that. With summing enabled, the final match weight in the result set will be computed as the sum of the match weight coming from each given index multiplied by the respective per-index weight specified in this call. Ie. if the document 123 is found in index A with the weight of 2, and also in index B with the weight of 3, and you called SetIndexWeights ( array ( "A"=>100, "B"=>10 ) ), the final weight returned to the client will be 2*100+3*10 = 230.

6.4. Result set filtering settings

6.4.1. SetIDRange

Prototype: function SetIDRange ( $min, $max )

Sets an accepted range of document IDs. Parameters must be integers. Defaults are 0 and 0; that combination means to not limit by range.

After this call, only those records that have document ID between $min and $max (including IDs exactly equal to $min or $max) will be matched.

6.4.2. SetFilter

Prototype: function SetFilter ( $attribute, $values, $exclude=false )

Adds new integer values set filter.

On this call, additional new filter is added to the existing list of filters. $attribute must be a string with attribute name. $values must be a plain array containing integer values. $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.

Only those documents where $attribute column value stored in the index matches any of the values from $values array will be matched (or rejected, if $exclude is true).
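
For example, the following sketch (using hypothetical 'category_id' and 'deleted' attributes) keeps only documents in categories 1, 3 or 7 and rejects documents flagged as deleted:

$cl->SetFilter ( "category_id", array ( 1, 3, 7 ) );
$cl->SetFilter ( "deleted", array ( 1 ), true ); // exclude documents where deleted=1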

6.4.3. SetFilterRange

Prototype: function SetFilterRange ( $attribute, $min, $max, $exclude=false )

Adds new integer range filter.

On this call, additional new filter is added to the existing list of filters. $attribute must be a string with attribute name. $min and $max must be integers that define the acceptable attribute values range (including the boundaries). $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.

Only those documents where $attribute column value stored in the index is between $min and $max (including values that are exactly equal to $min or $max) will be matched (or rejected, if $exclude is true).
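
For instance, to match only documents whose (hypothetical) "price" attribute lies between 100 and 200, inclusive:

$cl->SetFilterRange ( "price", 100, 200 );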

6.4.4. SetFilterFloatRange

Prototype: function SetFilterFloatRange ( $attribute, $min, $max, $exclude=false )

Adds a new float range filter.

On this call, an additional filter is added to the existing list of filters. $attribute must be a string with the attribute name. $min and $max must be floats that define the acceptable attribute value range (including the boundaries). $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.

Only those documents where $attribute column value stored in the index is between $min and $max (including values that are exactly equal to $min or $max) will be matched (or rejected, if $exclude is true).
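
A similar sketch for a float attribute, here a hypothetical "rating" attribute kept between 3.5 and 5.0, inclusive:

$cl->SetFilterFloatRange ( "rating", 3.5, 5.0 );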

6.4.5. SetGeoAnchor

Prototype: function SetGeoAnchor ( $attrlat, $attrlong, $lat, $long )

Sets the anchor point for geosphere distance (geodistance) calculations, and enables them.

$attrlat and $attrlong must be strings that contain the names of latitude and longitude attributes, respectively. $lat and $long are floats that specify anchor point latitude and longitude, in radians.

Once an anchor point is set, you can use the magic "@geodist" attribute name in your filters and/or sorting expressions. Sphinx will compute the geosphere distance between the given anchor point and the point specified by the latitude and longitude attributes from each full-text match, and attach this value to the resulting match. The latitude and longitude values, both in SetGeoAnchor and in the index attribute data, are expected to be in radians. The result will be returned in meters, so a geodistance value of 1000.0 means 1 km. 1 mile is approximately 1609.344 meters.
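
A brief sketch combining SetGeoAnchor() with a "@geodist" filter. The attribute names reuse the lat_radians/long_radians naming from the sql_attr_float example later in this manual, and the coordinates are placeholders; note that the anchor point must be passed in radians:

// anchor point at roughly 55.75 N, 37.62 E, converted to radians
$cl->SetGeoAnchor ( "lat_radians", "long_radians", deg2rad ( 55.75 ), deg2rad ( 37.62 ) );
// keep only matches within 10 km (10000 meters) of the anchor point
$cl->SetFilterFloatRange ( "@geodist", 0.0, 10000.0 );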

6.5. GROUP BY settings

6.5.1. SetGroupBy

Prototype: function SetGroupBy ( $attribute, $func, $groupsort="@group desc" )

Sets grouping attribute, function, and groups sorting mode; and enables grouping (as described in Section 4.6, “Grouping (clustering) search results ”).

$attribute is a string that contains group-by attribute name. $func is a constant that chooses a function applied to the attribute value in order to compute group-by key. $groupsort is a clause that controls how the groups will be sorted. Its syntax is similar to that described in Section 4.5, “SPH_SORT_EXTENDED mode”.

The grouping feature is very similar in nature to the GROUP BY clause in SQL. Results produced by this function call are going to be the same as produced by the following pseudo code:

SELECT ... GROUP BY $func($attribute) ORDER BY $groupsort

Note that it's $groupsort that affects the order of matches in the final result set. The sorting mode (see Section 6.3.3, “SetSortMode”) affects the ordering of matches within a group, ie. which match will be selected as the best one from the group. So you can, for instance, order the groups by match count and select the most relevant match within each group at the same time.
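
For instance, to order the groups by their match counts while letting relevance pick each group's best match (reusing the constants and the "category" attribute shown in the next section's example):

$cl->SetSortMode ( SPH_SORT_RELEVANCE );
$cl->SetGroupBy ( "category", SPH_GROUPBY_ATTR, "@count desc" );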

6.5.2. SetGroupDistinct

Prototype: function SetGroupDistinct ( $attribute )

Sets attribute name for per-group distinct values count calculations. Only available for grouping queries.

$attribute is a string that contains the attribute name. For each group, all values of this attribute will be stored (as RAM limits permit), then the amount of distinct values will be calculated and returned to the client. This feature is similar to COUNT(DISTINCT) clause in standard SQL; so these Sphinx calls:

$cl->SetGroupBy ( "category", SPH_GROUPBY_ATTR, "@count desc" );
$cl->SetGroupDistinct ( "vendor" );

can be expressed using the following SQL clauses:

SELECT id, weight, all-attributes,
	COUNT(DISTINCT vendor) AS @distinct,
	COUNT(*) AS @count
FROM products
GROUP BY category
ORDER BY @count DESC

In the sample pseudo code shown just above, the SetGroupDistinct() call corresponds to the COUNT(DISTINCT vendor) clause only. GROUP BY, ORDER BY, and COUNT(*) clauses are all an equivalent of the SetGroupBy() settings. Both queries will return one matching row for each category. In addition to indexed attributes, matches will also contain the total per-category match count, and the count of distinct vendor IDs within each category.

6.6. Querying

6.6.1. Query

Prototype: function Query ( $query, $index="*", $comment="" )

Connects to searchd server, runs given search query with current settings, obtains and returns the result set.

$query is a query string. $index is an index name (or names) string. Returns false and sets GetLastError() message on general error. Returns search result set on success. Additionally, the contents of $comment are sent to the query log, marked in square brackets, just before the search terms, which can be very useful for debugging. Currently, the comment is limited to 128 characters.

Default value for $index is "*" which means to query all local indexes. Characters allowed in index names include Latin letters (a-z), numbers (0-9), minus sign (-), and underscore (_); everything else is considered a separator. Therefore, all of the following sample calls are valid and will search the same two indexes:

$cl->Query ( "test query", "main delta" );
$cl->Query ( "test query", "main;delta" );
$cl->Query ( "test query", "main, delta" );

Index specification order matters. If documents with identical IDs are found in two or more indexes, weight and attribute values from the very last matching index will be used for sorting and returning to the client (unless explicitly overridden with SetIndexWeights()). Therefore, in the example above, matches from the "delta" index will always win over matches from "main".

On success, Query() returns a result set that contains some of the found matches (as requested by SetLimits()) and additional general per-query statistics. The result set is a hash (PHP specific; other languages might utilize other structures instead of hash) with the following keys and values:

"matches":
Hash which maps found document IDs to another small hash containing document weight and attribute values (or an array of the similar small hashes if SetArrayResult() was enabled).
"total":
Total amount of matches retrieved on server (ie. in the server-side result set) by this query. You can retrieve up to this amount of matches from the server for this query text with the current query settings.
"total_found":
Total amount of matching documents in the index (that were found and processed on server).
"words":
Hash which maps query keywords (case-folded, stemmed, and otherwise processed) to a small hash with per-keyword statistics ("docs", "hits").
"error":
Query error message reported by searchd (string, human readable). Empty if there were no errors.
"warning":
Query warning message reported by searchd (string, human readable). Empty if there were no warnings.

It should be noted that Query() carries out the same actions as AddQuery() and RunQueries() without the intermediate steps; it is analogous to a single AddQuery() call, followed by a corresponding RunQueries(), then returning the first array element of matches (from the first, and only, query.)
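
A minimal sketch of running a query and reading the documented result set keys ("error" via GetLastError(), "total_found", "matches"); $cl is assumed to be a configured client and "test1" is a placeholder index name:

$res = $cl->Query ( "test query", "test1" );
if ( $res===false )
{
	print "Query failed: " . $cl->GetLastError() . "\n";
} else
{
	print "total found: " . $res["total_found"] . "\n";
	foreach ( array_keys ( $res["matches"] ) as $docid )
		print "matched document ID: $docid\n";
}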

6.6.2. AddQuery

Prototype: function AddQuery ( $query, $index="*", $comment="" )

Adds an additional query with current settings to the multi-query batch. $query is a query string. $index is an index name (or names) string. Additionally, if provided, the contents of $comment are sent to the query log, marked in square brackets, just before the search terms, which can be very useful for debugging. Currently, this is limited to 128 characters. Returns the index into the results array returned from RunQueries().

Batch queries (or multi-queries) enable searchd to perform internal optimizations if possible. They also reduce network connection overheads and search process creation overheads in all cases. They do not result in any additional overheads compared to simple queries. Thus, if you run several different queries from your web page, you should always consider using multi-queries.

For instance, running the same full-text query but with different sorting or group-by settings will enable searchd to perform expensive full-text search and ranking operation only once, but compute multiple group-by results from its output.

This can be a big saver when you need to display not just plain search results but also some per-category counts, such as the amount of products grouped by vendor. Without multi-query, you would have to run several queries which perform essentially the same search and retrieve the same matches, but create result sets differently. With multi-query, you simply pass all these queries in a single batch and Sphinx optimizes away the redundant full-text searching internally.

AddQuery() internally saves full current settings state along with the query, and you can safely change them afterwards for subsequent AddQuery() calls. Already added queries will not be affected; there's actually no way to change them at all. Here's an example:

$cl->SetSortMode ( SPH_SORT_RELEVANCE );
$cl->AddQuery ( "hello world", "documents" );

$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "price" );
$cl->AddQuery ( "ipod", "products" );

$cl->AddQuery ( "harry potter", "books" );

$results = $cl->RunQueries ();

With the code above, 1st query will search for "hello world" in "documents" index and sort results by relevance, 2nd query will search for "ipod" in "products" index and sort results by price, and 3rd query will search for "harry potter" in "books" index while still sorting by price. Note that 2nd SetSortMode() call does not affect the first query (because it's already added) but affects both other subsequent queries.

Additionally, any filters set up before an AddQuery() will fall through to subsequent queries. So, if SetFilter() is called before the first query, the same filter will be in place for the second (and subsequent) queries batched through AddQuery() unless you call ResetFilters() first. Alternatively, you can add additional filters as well.

This would also be true for grouping options and sorting options; no current sorting, filtering, and grouping settings are affected by this call; so subsequent queries will reuse current query settings.

AddQuery() returns an index into the array of results that will be returned from the RunQueries() call. It is simply a sequentially increasing 0-based integer, ie. the first call will return 0, the second will return 1, and so on. Just a small helper so you won't have to track the indexes manually if you need them.

6.6.3. RunQueries

Prototype: function RunQueries ()

Connects to searchd, runs a batch of all queries added using AddQuery(), obtains and returns the result sets. Returns false and sets GetLastError() message on general error (such as network I/O failure). Returns a plain array of result sets on success.

Each result set in the returned array is exactly the same as the result set returned from Query().

Note that the batch query request itself almost always succeeds - unless there's a network error, blocking index rotation in progress, or another general failure which prevents the whole request from being processed.

However, individual queries within the batch might very well fail. In this case their respective result sets will contain a non-empty "error" message, but no matches or query statistics. In the extreme case all queries within the batch could fail. There still will be no general error reported, because the API was able to successfully connect to searchd, submit the batch, and receive the results - but every result set will have a specific error message.

6.6.4. ResetFilters

Prototype: function ResetFilters ()

Clears all currently set filters.

This call is only normally required when using multi-queries. You might want to set different filters for different queries in the batch. To do that, you should call ResetFilters() and add new filters using the respective calls.
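
For example, to run two batched queries with different filters (the attribute name, values, and index name are purely illustrative):

$cl->SetFilter ( "group_id", array ( 1 ) );
$cl->AddQuery ( "test", "products" );

$cl->ResetFilters ();
$cl->SetFilter ( "group_id", array ( 2 ) );
$cl->AddQuery ( "test", "products" );

$results = $cl->RunQueries ();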

6.6.5. ResetGroupBy

Prototype: function ResetGroupBy ()

Clears all current group-by settings, and disables group-by.

This call is only normally required when using multi-queries. You can change individual group-by settings using SetGroupBy() and SetGroupDistinct() calls, but you can not disable group-by using those calls. ResetGroupBy() fully resets previous group-by settings and disables group-by mode in the current state, so that subsequent AddQuery() calls can perform non-grouping searches.

6.7. Additional functionality

6.7.1. BuildExcerpts

Prototype: function BuildExcerpts ( $docs, $index, $words, $opts=array() )

Excerpts (snippets) builder function. Connects to searchd, asks it to generate excerpts (snippets) from given documents, and returns the results.

$docs is a plain array of strings that carry the documents' contents. $index is an index name string. Different settings (such as charset, morphology, wordforms) from given index will be used. $words is a string that contains the keywords to highlight. They will be processed with respect to index settings. For instance, if English stemming is enabled in the index, "shoes" will be highlighted even if keyword is "shoe". Starting with version 0.9.9, keywords can contain wildcards, that work similarly to star-syntax available in queries. $opts is a hash which contains additional optional highlighting parameters:

"before_match":
A string to insert before a keyword match. Default is "<b>".
"after_match":
A string to insert after a keyword match. Default is "</b>".
"chunk_separator":
A string to insert between snippet chunks (passages). Default is " ... ".
"limit":
Maximum snippet size, in symbols (codepoints). Integer, default is 256.
"around":
How many words to pick around each matching keywords block. Integer, default is 5.
"exact_phrase":
Whether to highlight exact query phrase matches only instead of individual keywords. Boolean, default is false.
"single_passage":
Whether to extract single best passage only. Boolean, default is false.
"weight_order":
Whether to sort the extracted passages in order of relevance (decreasing weight), or in order of appearance in the document (increasing position). Boolean, default is false.

Returns false on failure. Returns a plain array of strings with excerpts (snippets) on success.
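
A short usage sketch; the document texts, index name, keywords, and option values below are placeholders:

$docs = array
(
	"this is my test text to be highlighted",
	"this is another test text to be highlighted"
);
$opts = array ( "before_match" => "<b>", "after_match" => "</b>", "around" => 3 );
$res = $cl->BuildExcerpts ( $docs, "test1", "test text", $opts );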

6.7.2. UpdateAttributes

Prototype: function UpdateAttributes ( $index, $attrs, $values )

Instantly updates given attribute values in given documents. Returns number of actually updated documents (0 or more) on success, or -1 on failure.

$index is a name of the index (or indexes) to be updated. $attrs is a plain array with string attribute names, listing attributes that are updated. $values is a hash where key is document ID, and value is a plain array of new attribute values.

$index can be either a single index name or a list, like in Query(). Unlike Query(), wildcard is not allowed and all the indexes to update must be specified explicitly. The list of indexes can include distributed index names. Updates on distributed indexes will be pushed to all agents.

The updates only work with docinfo=extern storage strategy. They are very fast because they're working fully in RAM, but they can also be made persistent: updates are saved on disk on clean searchd shutdown initiated by SIGTERM signal. With additional restrictions, updates are also possible on MVA attributes; refer to mva_updates_pool directive for details.

Usage example:

$cl->UpdateAttributes ( "test1", array("group_id"), array(1=>array(456)) );
$cl->UpdateAttributes ( "products", array ( "price", "amount_in_stock" ),
	array ( 1001=>array(123,5), 1002=>array(37,11), 1003=>array(25,129) ) );

The first sample statement will update document 1 in index "test1", setting "group_id" to 456. The second one will update documents 1001, 1002 and 1003 in index "products". For document 1001, the new price will be set to 123 and the new amount in stock to 5; for document 1002, the new price will be 37 and the new amount will be 11; etc.

6.7.3. BuildKeywords

Prototype: function BuildKeywords ( $query, $index, $hits )

Extracts keywords from query using tokenizer settings for given index, optionally with per-keyword occurrence statistics. Returns an array of hashes with per-keyword information.

$query is a query to extract keywords from. $index is a name of the index to get tokenizing settings and keyword occurrence statistics from. $hits is a boolean flag that indicates whether keyword occurrence statistics are required.

Usage example:

$keywords = $cl->BuildKeywords ( "this.is.my query", "test1", false );

6.7.4. EscapeString

Prototype: function EscapeString ( $string )

Escapes characters that are treated as special operators by the query language parser. Returns an escaped string.

$string is a string to escape.

This function might seem redundant because it's trivial to implement in any calling application. However, as the set of special characters might change over time, it makes sense to have an API call that is guaranteed to escape all such characters at all times.

Usage example:

$escaped = $cl->EscapeString ( "escaping-sample@query/string" );

6.8. Persistent connections

Persistent connections allow using a single network connection to run multiple commands that would otherwise require reconnecting.

6.8.1. Open

Prototype: function Open ()

Opens persistent connection to the server.

6.8.2. Close

Prototype: function Close ()

Closes previously opened persistent connection.
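
A brief sketch of reusing one connection for several queries (error handling omitted for brevity; "test1" is a placeholder index name):

$cl->Open ();
$res1 = $cl->Query ( "first query", "test1" );
$res2 = $cl->Query ( "second query", "test1" );
$cl->Close ();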

7. MySQL storage engine (SphinxSE)

7.1. SphinxSE overview

SphinxSE is a MySQL storage engine which can be compiled into MySQL server 5.x using its pluggable architecture. It is not available for the MySQL 4.x series. It also requires MySQL 5.0.22 or higher in the 5.0.x series, or MySQL 5.1.12 or higher in the 5.1.x series.

Despite the name, SphinxSE does not actually store any data itself. It is actually a built-in client which allows MySQL server to talk to searchd, run search queries, and obtain search results. All indexing and searching happen outside MySQL.

Obvious SphinxSE applications include:

  • easier porting of MySQL FTS applications to Sphinx;
  • allowing Sphinx use with programming languages for which native APIs are not available yet;
  • optimizations when additional Sphinx result set processing on MySQL side is required (eg. JOINs with original document tables, additional MySQL-side filtering, etc).

7.2. Installing SphinxSE

You will need to obtain a copy of the MySQL sources, prepare them, and then recompile the MySQL binary. MySQL sources (mysql-5.x.yy.tar.gz) can be obtained from the dev.mysql.com Web site.

For some MySQL versions, there are delta tarballs with already prepared source versions available from the Sphinx Web site. After unpacking one of those over the original sources, MySQL will be ready to be configured and built with Sphinx support.

If such a tarball is not available, or does not work for you for any reason, you will have to prepare the sources manually. You will need the GNU Autotools framework (autoconf, automake and libtool) installed to do that.

7.2.1. Compiling MySQL 5.0.x with SphinxSE

Skip steps 1-3 if using an already prepared delta tarball.

  1. copy sphinx.5.0.yy.diff patch file into MySQL sources directory and run
    patch -p1 < sphinx.5.0.yy.diff
    

    If there is no .diff file for the exact version you need to build, try applying a .diff with the closest version numbers. It is important that the patch applies with no rejects.

  2. in MySQL sources directory, run
    sh BUILD/autorun.sh
    
  3. in the MySQL sources directory, create a sql/sphinx directory and copy all files from the mysqlse directory in the Sphinx sources there. Example:
    cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.0.24/sql/sphinx
    
  4. configure MySQL and enable Sphinx engine:
    ./configure --with-sphinx-storage-engine
    
  5. build and install MySQL:
    make
    make install
    

7.2.2. Compiling MySQL 5.1.x with SphinxSE

Skip steps 1-2 if using already prepared delta tarball.

  1. in the MySQL sources directory, create a storage/sphinx directory and copy all files from the mysqlse directory in the Sphinx sources there. Example:
    cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.1.14/storage/sphinx
    
  2. in MySQL sources directory, run
    sh BUILD/autorun.sh
    
  3. configure MySQL and enable Sphinx engine:
    ./configure --with-plugins=sphinx
    
  4. build and install MySQL:
    make
    make install
    

7.2.3. Checking SphinxSE installation

To check whether SphinxSE has been successfully compiled into MySQL, launch the newly built server, run the mysql client and issue the SHOW ENGINES query. You should see a list of all available engines. Sphinx should be present and the "Support" column should contain "YES":
     
mysql> show engines;
+------------+----------+----------------------------------------------------------------+
| Engine     | Support  | Comment                                                        |
+------------+----------+----------------------------------------------------------------+
| MyISAM     | DEFAULT  | Default engine as of MySQL 3.23 with great performance         |
  ...
| SPHINX     | YES      | Sphinx storage engine                                          |
  ...
+------------+----------+----------------------------------------------------------------+
13 rows in set (0.00 sec)    

7.3. Using SphinxSE

To search via SphinxSE, you will need to create a special ENGINE=SPHINX "search table", and then SELECT from it with the full-text query placed in the WHERE clause for the query column.

Let's begin with an example create statement and search query:

CREATE TABLE t1
(
    id          INTEGER NOT NULL,
    weight      INTEGER NOT NULL,
    query       VARCHAR(3072) NOT NULL,
    group_id    INTEGER,
    INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://localhost:3312/test";

SELECT * FROM t1 WHERE query='test it;mode=any';

The first 3 columns of the search table must be INTEGER, INTEGER and VARCHAR, which will be mapped to document ID, match weight and search query respectively. The query column must be indexed; all the others must be kept unindexed. Column names are ignored so you can use arbitrary ones.

Additional columns must be either INTEGER or TIMESTAMP. They will be bound to attributes provided in Sphinx result set by name, so their names must match attribute names specified in sphinx.conf. If there's no such attribute name in Sphinx search results, column will have NULL values.

Special "virtual" attributes names can also be bound to SphinxSE columns. _sph_ needs to be used instead of @ for that. For instance, to obtain @group and @count virtual attributes, use _sph_group and _sph_count column names.

CONNECTION string parameter can be used to specify default searchd host, port and indexes for queries issued using this table. If no connection string is specified in CREATE TABLE, index name "*" (ie. search all indexes) and localhost:3312 are assumed. Connection string syntax is as follows:

CONNECTION="sphinx://HOST:PORT/INDEXNAME"

You can change the default connection string later:

ALTER TABLE t1 CONNECTION="sphinx://NEWHOST:NEWPORT/NEWINDEXNAME";

You can also override all these parameters per-query.

As seen in the example, both the query text and the search options should be put into the WHERE clause on the search query column (ie. the 3rd column); the options are separated by semicolons, and option names are separated from their values by an equals sign. Any number of options can be specified. Available options are:

  • query - query text;
  • mode - matching mode. Must be one of "all", "any", "phrase", "boolean", or "extended". Default is "all";
  • sort - match sorting mode. Must be one of "relevance", "attr_desc", "attr_asc", "time_segments", or "extended". In all modes besides "relevance" attribute name (or sorting clause for "extended") is also required after a colon:
    ... WHERE query='test;sort=attr_asc:group_id';
    ... WHERE query='test;sort=extended:@weight desc, group_id asc';
    
  • offset - offset into result set, default is 0;
  • limit - amount of matches to retrieve from result set, default is 20;
  • index - names of the indexes to search:
    ... WHERE query='test;index=test1;';
    ... WHERE query='test;index=test1,test2,test3;';
    
  • minid, maxid - min and max document ID to match;
  • weights - comma-separated list of weights to be assigned to Sphinx full-text fields:
    ... WHERE query='test;weights=1,2,3;';
    
  • filter, !filter - comma-separated attribute name and a set of values to match:
    # only include groups 1, 5 and 19
    ... WHERE query='test;filter=group_id,1,5,19;';
    
    # exclude groups 3 and 11
    ... WHERE query='test;!filter=group_id,3,11;';
    
  • range, !range - comma-separated attribute name, min and max value to match:
    # include groups from 3 to 7, inclusive
    ... WHERE query='test;range=group_id,3,7;';
    
    # exclude groups from 5 to 25
    ... WHERE query='test;!range=group_id,5,25;';
    
  • maxmatches - per-query max matches value:
    ... WHERE query='test;maxmatches=2000;';
    
  • groupby - group-by function and attribute:
    ... WHERE query='test;groupby=day:published_ts;';
    ... WHERE query='test;groupby=attr:group_id;';
    
  • groupsort - group-by sorting clause:
    ... WHERE query='test;groupsort=@count desc;';
    
  • indexweights - comma-separated list of index names and weights to use when searching through several indexes:
    ... WHERE query='test;indexweights=idx_exact,2,idx_stemmed,1;';
    

One very important note: it is much more efficient to allow Sphinx to perform sorting, filtering and slicing of the result set than to raise the max matches count and use WHERE, ORDER BY and LIMIT clauses on the MySQL side. This is for two reasons. First, Sphinx does a number of optimizations and performs better than MySQL on these tasks. Second, less data would need to be packed by searchd, transferred and unpacked by SphinxSE.

Starting with version 0.9.9, additional query info besides the result set can be retrieved with the SHOW ENGINE SPHINX STATUS statement:

mysql> SHOW ENGINE SPHINX STATUS;
+--------+-------+-------------------------------------------------+
| Type   | Name  | Status                                          |
+--------+-------+-------------------------------------------------+
| SPHINX | stats | total: 25, total found: 25, time: 126, words: 2 | 
| SPHINX | words | sphinx:591:1256 soft:11076:15945                | 
+--------+-------+-------------------------------------------------+
2 rows in set (0.00 sec)

This information can also be accessed through status variables. Note that this method does not require super-user privileges.

mysql> SHOW STATUS LIKE 'sphinx_%';
+--------------------+----------------------------------+
| Variable_name      | Value                            |
+--------------------+----------------------------------+
| sphinx_total       | 25                               | 
| sphinx_total_found | 25                               | 
| sphinx_time        | 126                              | 
| sphinx_word_count  | 2                                | 
| sphinx_words       | sphinx:591:1256 soft:11076:15945 | 
+--------------------+----------------------------------+
5 rows in set (0.00 sec)

You can perform JOINs between a SphinxSE search table and tables using other engines. Here's an example with "documents" from example.sql:

mysql> SELECT content, date_added FROM test.documents docs
-> JOIN t1 ON (docs.id=t1.id) 
-> WHERE query="one document;mode=any";
+-------------------------------------+---------------------+
| content                             | docdate             |
+-------------------------------------+---------------------+
| this is my test document number two | 2006-06-17 14:04:28 | 
| this is my test document number one | 2006-06-17 14:04:28 | 
+-------------------------------------+---------------------+
2 rows in set (0.00 sec)

mysql> SHOW ENGINE SPHINX STATUS;
+--------+-------+---------------------------------------------+
| Type   | Name  | Status                                      |
+--------+-------+---------------------------------------------+
| SPHINX | stats | total: 2, total found: 2, time: 0, words: 2 | 
| SPHINX | words | one:1:2 document:2:2                        | 
+--------+-------+---------------------------------------------+
2 rows in set (0.00 sec)

8. Reporting bugs

Unfortunately, Sphinx is not yet 100% bug free (even though I'm working hard towards that), so you might occasionally run into some issues.

Reporting as much as possible about each bug is very important - because to fix it, I need to be able either to reproduce and debug the bug, or to deduce what's causing it from the information that you provide. So here are some instructions on how to do that.

Build-time issues

If Sphinx fails to build for some reason, please do the following:

  1. check that headers and libraries for your DBMS are properly installed (for instance, check that mysql-devel package is present);
  2. report Sphinx version and config file (be sure to remove the passwords!), MySQL (or PostgreSQL) configuration info, gcc version, OS version and CPU type (ie. x86, x86-64, PowerPC, etc):
    mysql_config
    gcc --version
    uname -a
    
  3. report the error message produced by configure or gcc (include the error message itself only, not the whole build log).

Run-time issues

If Sphinx builds and runs, but there are any problems running it, please do the following:

  1. describe the bug (ie. both the expected behavior and actual behavior) and all the steps necessary to reproduce it;
  2. include Sphinx version and config file (be sure to remove the passwords!), MySQL (or PostgreSQL) version, gcc version, OS version and CPU type (ie. x86, x86-64, PowerPC, etc):
    mysql --version
    gcc --version
    uname -a
    
  3. build, install and run debug versions of all Sphinx programs (this is to enable a lot of additional internal checks, so-called assertions):
    make distclean
    ./configure --with-debug
    make install
    killall -TERM searchd
    
  4. reindex to check if any assertions are triggered (in this case, it's likely that the index is corrupted and causing problems);
  5. if the bug does not reproduce with debug versions, revert to non-debug and mention it in your report;
  6. if the bug could be easily reproduced with a small (1-100 record) part of your database, please provide a gzipped dump of that part;
  7. if the problem is related to searchd, include relevant entries from searchd.log and query.log in your bug report;
  8. if the problem is related to searchd, try running it in console mode and check if it dies with an assertion:
    ./searchd --console
    
  9. if any program dies with an assertion, provide the assertion message.

Debugging assertions, crashes and hangups

If any program dies with an assertion, crashes without an assertion or hangs up, you would additionally need to generate a core dump and examine it.

  1. enable core dumps. On most Linux systems, this is done using ulimit:
    ulimit -c 32768
    
  2. run the program and try to reproduce the bug;
  3. if the program crashes (either with or without an assertion), find the core file in the current directory (the program should typically print out a "Segmentation fault (core dumped)" message);
  4. if the program hangs, use kill -SEGV from another console to force it to exit and dump core:
    kill -SEGV HANGED-PROCESS-ID
    
  5. use gdb to examine the core file and obtain a backtrace:
    gdb ./CRASHED-PROGRAM-FILE-NAME CORE-DUMP-FILE-NAME
    (gdb) bt
    (gdb) quit
    

Note that HANGED-PROCESS-ID, CRASHED-PROGRAM-FILE-NAME and CORE-DUMP-FILE-NAME must all be replaced with specific numbers and file names. For example, a hung searchd debugging session would look like:

# kill -SEGV 12345
# ls *core*
core.12345
# gdb ./searchd core.12345
(gdb) bt
...
(gdb) quit

Note that ulimit is not server-wide and only affects current shell session. This means that you will not have to restore any server-wide limits - but if you relogin, you will have to set ulimit again.

Core dumps should be placed in current working directory (and Sphinx programs do not change it), so this is where you would look for them.

Please do not immediately remove the core file because there could be additional helpful information which could be retrieved from it. You do not need to send me this file (as the debug info there is closely tied to your system) but I might need to ask you a few additional questions about it.

9. sphinx.conf options reference

9.1. Data source configuration options

9.1.1. type

Data source type. Mandatory, no default value. Known types are mysql, pgsql, mssql, xmlpipe and xmlpipe2.

All other per-source options depend on source type selected by this option. Names of the options used for SQL sources (ie. MySQL, PostgreSQL, MS SQL) start with "sql_"; names of the ones used for xmlpipe and xmlpipe2 start with "xmlpipe_". All source types except xmlpipe are conditional; they might or might not be supported depending on your build settings, installed client libraries, etc. mssql type is currently only available on Windows.

Example:
type = mysql

9.1.2. sql_host

SQL server host to connect to. Mandatory, no default value. Applies to SQL source types (mysql, pgsql, mssql) only.

In the simplest case when Sphinx resides on the same host with your MySQL or PostgreSQL installation, you would simply specify "localhost". Note that MySQL client library chooses whether to connect over TCP/IP or over UNIX socket based on the host name. Generally speaking, "localhost" will force it to use UNIX socket (this is the default and generally recommended mode) and "127.0.0.1" will force TCP/IP usage. Refer to MySQL manual for more details.

Example:
sql_host = localhost

9.1.3. sql_port

SQL server IP port to connect to. Optional, default is 3306 for mysql source type and 5432 for pgsql type. Applies to SQL source types (mysql, pgsql, mssql) only. Note that it depends on sql_host setting whether this value will actually be used.

Example:
sql_port = 3306

9.1.4. sql_user

SQL user to use when connecting to sql_host. Mandatory, no default value. Applies to SQL source types (mysql, pgsql, mssql) only.

Example:
sql_user = test

9.1.5. sql_pass

SQL user password to use when connecting to sql_host. Mandatory, no default value. Applies to SQL source types (mysql, pgsql, mssql) only.

Example:
sql_pass = mysecretpassword

9.1.6. sql_db

SQL database (in MySQL terms) to use after the connection and perform further queries within. Mandatory, no default value. Applies to SQL source types (mysql, pgsql, mssql) only.

Example:
sql_db = test

9.1.7. sql_sock

UNIX socket name to connect to for local SQL servers. Optional, default value is empty (use client library default settings). Applies to SQL source types (mysql, pgsql, mssql) only.

On Linux, it would typically be /var/lib/mysql/mysql.sock. On FreeBSD, it would typically be /tmp/mysql.sock. Note that it depends on sql_host setting whether this value will actually be used.

Example:
sql_sock = /tmp/mysql.sock

9.1.8. mysql_connect_flags

MySQL client connection flags. Optional, default value is 0 (do not set any flags). Applies to mysql source type only.

This option must contain an integer value with the sum of the flags. The value will be passed to mysql_real_connect() verbatim. The flags are enumerated in mysql_com.h include file. Flags that are especially interesting in regard to indexing, with their respective values, are as follows:

  • CLIENT_COMPRESS = 32; can use compression protocol
  • CLIENT_SSL = 2048; switch to SSL after handshake
  • CLIENT_SECURE_CONNECTION = 32768; new 4.1 authentication

For instance, you can specify 2080 (2048+32) to use both compression and SSL, or 32768 to use new authentication only. Initially, this option was introduced to be able to use compression when the indexer and mysqld are on different hosts. Compression on 1 Gbps links is most likely to hurt indexing time though it reduces network traffic, both in theory and in practice. However, enabling compression on 100 Mbps links may improve indexing time significantly (upto 20-30% of the total indexing time improvement was reported). Your mileage may vary.

Example:
mysql_connect_flags = 32 # enable compression

9.1.9. sql_query_pre

Pre-fetch query, or pre-query. Multi-value, optional, default is empty list of queries. Applies to SQL source types (mysql, pgsql, mssql) only.

Multi-value means that you can specify several pre-queries. They are executed before the main fetch query, and they will be executed exactly in the order of appearance in the configuration file. Pre-query results are ignored.

Pre-queries are useful in a lot of ways. They are used to setup encoding, mark records that are going to be indexed, update internal counters, set various per-connection SQL server options and variables, and so on.

Perhaps the most frequent pre-query usage is to specify the encoding that the server will use for the rows it returns. It must match the encoding that Sphinx expects (as specified by the charset_type and charset_table options). Two MySQL specific examples of setting the encoding are:

sql_query_pre = SET CHARACTER_SET_RESULTS=cp1251
sql_query_pre = SET NAMES utf8

Also specific to MySQL sources, it is useful to disable query cache (for indexer connection only) in pre-query, because indexing queries are not going to be re-run frequently anyway, and there's no sense in caching their results. That could be achieved with:

sql_query_pre = SET SESSION query_cache_type=OFF

Example:
sql_query_pre = SET NAMES utf8
sql_query_pre = SET SESSION query_cache_type=OFF

9.1.10. sql_query

Main document fetch query. Mandatory, no default value. Applies to SQL source types (mysql, pgsql, mssql) only.

There can be only one main query. This is the query which is used to retrieve documents from the SQL server. You can specify up to 32 full-text fields (formally, up to SPH_MAX_FIELDS from sphinx.h), and an arbitrary amount of attributes. All of the columns that are neither document ID (the first one) nor attributes will be full-text indexed.

Document ID MUST be the very first field, and it MUST BE A UNIQUE UNSIGNED POSITIVE (NON-ZERO, NON-NEGATIVE) INTEGER NUMBER. It can be either 32-bit or 64-bit, depending on how you built Sphinx; by default it builds with 32-bit ID support, but the --enable-id64 option to configure allows building with 64-bit document and word ID support.

Example:
sql_query = \
	SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, \
		title, content \
	FROM documents

9.1.11. sql_query_range

Range query setup. Optional, default is empty. Applies to SQL source types (mysql, pgsql, mssql) only.

Setting this option enables ranged document fetch queries (see Section 3.7, “Ranged queries”). Ranged queries are useful to avoid notorious MyISAM table locks when indexing lots of data. (They also help with other less notorious issues, such as reduced performance caused by big result sets, or additional resources consumed by InnoDB to serialize big read transactions.)

The query specified in this option must fetch min and max document IDs that will be used as range boundaries. It must return exactly two integer fields, min ID first and max ID second; the field names are ignored.

When ranged queries are enabled, sql_query will be required to contain $start and $end macros (because it obviously would be a mistake to index the whole table many times over). Note that the intervals specified by $start..$end will not overlap, so you should not remove document IDs that are exactly equal to $start or $end from your query. The example in Section 3.7, “Ranged queries” illustrates that; note how it uses greater-or-equal and less-or-equal comparisons.

Example:
sql_query_range = SELECT MIN(id),MAX(id) FROM documents

9.1.12. sql_range_step

Range query step. Optional, default is 1024. Applies to SQL source types (mysql, pgsql, mssql) only.

Only used when ranged queries are enabled. The full document ID interval fetched by sql_query_range will be walked in steps of this size. For example, if the min and max IDs fetched are 12 and 3456 respectively, and the step is 1000, indexer will call sql_query several times with the following substitutions:

  • $start=12, $end=1011
  • $start=1012, $end=2011
  • $start=2012, $end=3011
  • $start=3012, $end=3456

Example:
sql_range_step = 1000

9.1.13. sql_query_killlist

Kill-list query. Optional, default is empty (no query). Applies to SQL source types (mysql, pgsql, mssql) only. Introduced in version 0.9.9.

This query is expected to return a number of 1-column rows, each containing just the document ID. The returned document IDs are stored within an index. Kill-list for a given index suppresses results from other indexes, depending on index order in the query. The intended use is to help implement deletions and updates on existing indexes without rebuilding (actually even touching them), and especially to fight phantom results problem.

Let us dissect an example. Assume we have two indexes, 'main' and 'delta'. Assume that documents 2, 3, and 5 were deleted since last reindex of 'main', and documents 7 and 11 were updated (ie. their text contents were changed). Assume that a keyword 'test' occurred in all these mentioned documents when we were indexing 'main'; still occurs in document 7 as we index 'delta'; but does not occur in document 11 any more. We now reindex delta and then search through both these indexes in proper (least to most recent) order:

$res = $cl->Query ( "test", "main delta" );

First, we need to properly handle deletions. The result set should not contain documents 2, 3, or 5. Second, we also need to avoid phantom results. Unless we do something about it, document 11 will appear in search results! It will be found in 'main' (but not 'delta'). And it will make it to the final result set unless something stops it.

Kill-list, or K-list for short, is that something. Kill-list attached to 'delta' will suppress the specified rows from all the preceding indexes, in this case just 'main'. So to get the expected results, we should put all the updated and deleted document IDs into it.

Example:
sql_query_killlist = \
	SELECT id FROM documents WHERE updated_ts>=@last_reindex UNION \
	SELECT id FROM documents_deleted WHERE deleted_ts>=@last_reindex

9.1.14. sql_attr_uint

Unsigned integer attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.

The column value should fit into 32-bit unsigned integer range. Values outside this range will be accepted but wrapped around. For instance, -1 will be wrapped around to 2^32-1 or 4,294,967,295.

You can specify bit count for integer attributes by appending ':BITCOUNT' to attribute name (see example below). Attributes with less than default 32-bit size, or bitfields, perform slower. But they require less RAM when using extern storage: such bitfields are packed together in 32-bit chunks in .spa attribute data file. Bit size settings are ignored if using inline storage.

Example:
sql_attr_uint = group_id
sql_attr_uint = forum_id:9 # 9 bits for forum_id

9.1.15. sql_attr_bool

Boolean attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only. Equivalent to sql_attr_uint declaration with a bit count of 1.

Example:
sql_attr_bool = is_deleted # will be packed to 1 bit

9.1.16. sql_attr_bigint

64-bit signed integer attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only. Note that unlike sql_attr_uint, these values are signed. Introduced in version 0.9.9.

Example:
sql_attr_bigint = my_bigint_id

9.1.17. sql_attr_timestamp

UNIX timestamp attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.

The column value should be a timestamp in UNIX format, ie. 32-bit unsigned integer number of seconds elapsed since midnight, January 01, 1970, GMT. Timestamps are internally stored and handled as integers everywhere. But in addition to working with timestamps as integers, it's also legal to use them along with different date-based functions - such as time segments sorting mode, or day/week/month/year extraction for GROUP BY. Note that DATE or DATETIME column types in MySQL can not be directly used as timestamps; you need to explicitly convert such columns using UNIX_TIMESTAMP function.

Example:
sql_attr_timestamp = UNIX_TIMESTAMP(added_datetime) AS added_ts

9.1.18. sql_attr_str2ordinal

Ordinal string number attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.

This attribute type (so-called ordinal, for brevity) is intended to allow sorting by string values, but without storing the strings themselves. When indexing ordinals, string values are fetched from the database, temporarily stored, sorted, and then replaced by their respective ordinal numbers in the array of sorted strings. So, the ordinal number is an integer such that sorting by it produces the same result as lexicographically sorting by the original strings.

Earlier versions could consume a lot of RAM for indexing ordinals. Starting with revision r1112, ordinals accumulation and sorting also runs in fixed memory (at the cost of using additional temporary disk space), and honors mem_limit settings.

Ideally the strings should be sorted differently, depending on the encoding and locale. For instance, if the strings are known to be Russian text in KOI8R encoding, sorting the bytes 0xE0, 0xE1, and 0xE2 should produce 0xE1, 0xE2 and 0xE0, because in KOI8R value 0xE0 encodes a character that is (noticeably) after characters encoded by 0xE1 and 0xE2. Unfortunately, Sphinx does not support that at the moment and will simply sort the strings bytewise.

Note that the ordinals are by construction local to each index, and it's therefore impossible to merge ordinals while retaining the proper order. The processed strings are replaced by their sequential number in the index they occurred in, but different indexes have different sets of strings. For instance, if 'main' index contains strings "aaa", "bbb", "ccc", and so on up to "zzz", they'll be assigned numbers 1, 2, 3, and so on up to 26, respectively. But then if 'delta' only contains "zzz" the assigned number will be 1. And after the merge, the order will be broken. Unfortunately, this is impossible to work around without storing the original strings (and once Sphinx supports storing the original strings, ordinals will not be necessary any more).

Example:
sql_attr_str2ordinal = author_name

9.1.19. sql_attr_float

Floating point attribute declaration. Multi-value (there might be multiple attributes declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.

The values will be stored in single precision, 32-bit IEEE 754 format. Represented range is approximately from 1e-38 to 1e+38. The amount of decimal digits that can be stored precisely is approximately 7. One important usage of the float attributes is storing latitude and longitude values (in radians), for further usage in query-time geosphere distance calculations.

Example:
sql_attr_float = lat_radians
sql_attr_float = long_radians

9.1.20. sql_attr_multi

Multi-valued attribute (MVA) declaration. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to SQL source types (mysql, pgsql, mssql) only.

Plain attributes only allow to attach 1 value per each document. However, there are cases (such as tags or categories) when it is desired to attach multiple values of the same attribute and be able to apply filtering or grouping to value lists.

The declaration format is as follows (backslashes are for clarity only; everything can be declared in a single line as well):

sql_attr_multi = ATTR-TYPE ATTR-NAME 'from' SOURCE-TYPE \
	[;QUERY] \
	[;RANGE-QUERY]

where

  • ATTR-TYPE is 'uint' or 'timestamp'
  • SOURCE-TYPE is 'field', 'query', or 'ranged-query'
  • QUERY is SQL query used to fetch all ( docid, attrvalue ) pairs
  • RANGE-QUERY is SQL query used to fetch min and max ID values, similar to 'sql_query_range'

Example:
sql_attr_multi = uint tag from query; SELECT id, tag FROM tags
sql_attr_multi = uint tag from ranged-query; \
	SELECT id, tag FROM tags WHERE id>=$start AND id<=$end; \
	SELECT MIN(id), MAX(id) FROM tags

9.1.21. sql_query_post

Post-fetch query. Optional, default value is empty. Applies to SQL source types (mysql, pgsql, mssql) only.

This query is executed immediately after sql_query completes successfully. If the post-fetch query produces errors, they are reported as warnings, but indexing is not terminated. Its result set is ignored. Note that indexing is not yet completed at the point when this query gets executed, and further indexing still may fail. Therefore, any permanent updates should not be done from here. For instance, updates on a helper table that permanently change the last successfully indexed ID should not be run from the post-fetch query; they should be run from the post-index query instead.

Example:
sql_query_post = DROP TABLE my_tmp_table

9.1.22. sql_query_post_index

Post-index query. Optional, default value is empty. Applies to SQL source types (mysql, pgsql, mssql) only.

This query is executed when indexing is fully and successfully completed. If this query produces errors, they are reported as warnings, but indexing is not terminated. Its result set is ignored. The $maxid macro can be used in its text; it will be expanded to the maximum document ID which was actually fetched from the database during indexing.

Example:
sql_query_post_index = REPLACE INTO counters ( id, val ) \
    VALUES ( 'max_indexed_id', $maxid )

9.1.23. sql_ranged_throttle

Ranged query throttling period, in milliseconds. Optional, default is 0 (no throttling). Applies to SQL source types (mysql, pgsql, mssql) only.

Throttling can be useful when indexer imposes too much load on the database server. It causes the indexer to sleep for the given amount of milliseconds once per each ranged query step. This sleep is unconditional, and is performed before the fetch query.

Example:
sql_ranged_throttle = 1000 # sleep for 1 sec before each query step

9.1.24. sql_query_info

Document info query. Optional, default is empty. Applies to mysql source type only.

Only used by CLI search to fetch and display document information, only works with MySQL at the moment, and only intended for debugging purposes. This query fetches the row that will be displayed by CLI search utility for each document ID. It is required to contain $id macro that expands to the queried document ID.

Example:
sql_query_info = SELECT * FROM documents WHERE id=$id

9.1.25. xmlpipe_command

Shell command that invokes xmlpipe stream producer. Mandatory. Applies to xmlpipe and xmlpipe2 source types only.

Specifies a command that will be executed and which output will be parsed for documents. Refer to Section 3.8, “xmlpipe data source” or Section 3.9, “xmlpipe2 data source” for specific format description.

Example:
xmlpipe_command = cat /home/sphinx/test.xml

9.1.26. xmlpipe_field

xmlpipe field declaration. Multi-value, optional. Applies to xmlpipe2 source type only. Refer to Section 3.9, “xmlpipe2 data source”.

Example:
xmlpipe_field = subject
xmlpipe_field = content

9.1.27. xmlpipe_attr_uint

xmlpipe integer attribute declaration. Multi-value, optional. Applies to xmlpipe2 source type only. Syntax fully matches that of sql_attr_uint.

Example:
xmlpipe_attr_uint = author

9.1.28. xmlpipe_attr_bool

xmlpipe boolean attribute declaration. Multi-value, optional. Applies to xmlpipe2 source type only. Syntax fully matches that of sql_attr_bool.

Example:
xmlpipe_attr_bool = is_deleted # will be packed to 1 bit

9.1.29. xmlpipe_attr_timestamp

xmlpipe UNIX timestamp attribute declaration. Multi-value, optional. Applies to xmlpipe2 source type only. Syntax fully matches that of sql_attr_timestamp.

Example:
xmlpipe_attr_timestamp = published

9.1.30. xmlpipe_attr_str2ordinal

xmlpipe string ordinal attribute declaration. Multi-value, optional. Applies to xmlpipe2 source type only. Syntax fully matches that of sql_attr_str2ordinal.

Example:
xmlpipe_attr_str2ordinal = author_sort

9.1.31. xmlpipe_attr_float

xmlpipe floating point attribute declaration. Multi-value, optional. Applies to xmlpipe2 source type only. Syntax fully matches that of sql_attr_float.

Example:
xmlpipe_attr_float = lat_radians
xmlpipe_attr_float = long_radians

9.1.32. xmlpipe_attr_multi

xmlpipe MVA attribute declaration. Multi-value, optional. Applies to xmlpipe2 source type only.

This setting declares an MVA attribute tag in xmlpipe2 stream. The contents of the specified tag will be parsed and a list of integers that will constitute the MVA will be extracted, similar to how sql_attr_multi parses SQL column contents when 'field' MVA source type is specified.

Example:
xmlpipe_attr_multi = taglist

9.1.33. mssql_winauth

MS SQL Windows authentication flag. Boolean, optional, default value is 0 (false). Applies to mssql source type only. Introduced in version 0.9.9.

Whether to use currently logged in Windows account credentials for authentication when connecting to MS SQL Server. Note that when running searchd as a service, account user can differ from the account you used to install the service.

Example:
mssql_winauth = 1

9.1.34. mssql_unicode

MS SQL encoding type flag. Boolean, optional, default value is 0 (false). Applies to mssql source type only. Introduced in version 0.9.9.

Whether to ask for Unicode or single-byte data when querying MS SQL Server. This flag must be in sync with charset_type directive; that is, to index Unicode data, you must set both charset_type in the index (to 'utf-8') and mssql_unicode in the source (to 1). For reference, MS SQL will actually return data in UCS-2 encoding instead of UTF-8, but Sphinx will automatically handle that.

Example:
mssql_unicode = 1

9.1.35. unpack_zlib

Columns to unpack using zlib (aka deflate, aka gunzip). Multi-value, optional, default value is empty list of columns. Applies to SQL source types (mysql, pgsql, mssql) only. Introduced in version 0.9.9.

Columns specified using this directive will be unpacked by indexer using standard zlib algorithm (called deflate and also implemented by gunzip). When indexing on a different box than the database, this lets you offload the database, and save on network traffic. The feature is only available if zlib and zlib-devel were both available during build time.

Example:
unpack_zlib = col1
unpack_zlib = col2

9.1.36. unpack_mysqlcompress

Columns to unpack using MySQL UNCOMPRESS() algorithm. Multi-value, optional, default value is empty list of columns. Applies to SQL source types (mysql, pgsql, mssql) only. Introduced in version 0.9.9.

Columns specified using this directive will be unpacked by indexer using modified zlib algorithm used by MySQL COMPRESS() and UNCOMPRESS() functions. When indexing on a different box than the database, this lets you offload the database, and save on network traffic. The feature is only available if zlib and zlib-devel were both available during build time.

Example:
unpack_mysqlcompress = body_compressed
unpack_mysqlcompress = description_compressed

9.1.37. unpack_mysqlcompress_maxsize

Buffer size for UNCOMPRESS()ed data. Optional, default value is 16M. Introduced in version 0.9.9.

When using unpack_mysqlcompress, due to implementation intricacies it is not possible to deduce the required buffer size from the compressed data. So the buffer must be preallocated in advance, and unpacked data can not go over the buffer size. This option lets you control the buffer size, both to limit indexer memory use, and to enable unpacking of really long data fields if necessary.

Example:
unpack_mysqlcompress_maxsize = 1M

9.2. Index configuration options

9.2.1. type

Index type. Optional, default is empty (index is plain local index). Known values are empty string or 'distributed'.

Sphinx supports two different types of indexes: local, that are stored and processed on the local machine; and distributed, that involve not only local searching but querying remote searchd instances over the network as well. Index type settings lets you choose this type. By default, indexes are local. Specifying 'distributed' for type enables distributed searching, see Section 4.7, “Distributed searching”.

Example:
type = distributed

9.2.2. source

Adds document source to local index. Multi-value, mandatory.

Specifies document source to get documents from when the current index is indexed. There must be at least one source. There may be multiple sources, without any restrictions on the source types: ie. you can pull part of the data from MySQL server, part from PostgreSQL, part from the filesystem using xmlpipe2 wrapper.

However, there are some restrictions on the source data. First, document IDs must be globally unique across all sources. If that condition is not met, you might get unexpected search results. Second, source schemas must be the same in order to be stored within the same index.

No source ID is stored automatically. Therefore, in order to be able to tell what source the matched document came from, you will need to store some additional information yourself. Two typical approaches include:

  1. mangling document ID and encoding source ID in it:
    source src1
    {
    	sql_query = SELECT id*10+1, ... FROM table1
    	...
    }
    
    source src2
    {
    	sql_query = SELECT id*10+2, ... FROM table2
    	...
    }
    
  2. storing source ID simply as an attribute:
    source src1
    {
    	sql_query = SELECT id, 1 AS source_id FROM table1
    	sql_attr_uint = source_id
    	...
    }
    
    source src2
    {
    	sql_query = SELECT id, 2 AS source_id FROM table2
    	sql_attr_uint = source_id
    	...
    }
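
In either case, the application reads the source back through the standard sphinxapi.php client. Here is a minimal PHP sketch; the index name and query are placeholders:

require ( "sphinxapi.php" );

$cl = new SphinxClient ();
$res = $cl->Query ( "test query", "myindex" );
if ( $res && !empty ( $res["matches"] ) )
{
	foreach ( $res["matches"] as $docid => $match )
	{
		// approach 1: decode the mangled document ID
		$src_from_id   = $docid % 10;
		// approach 2: read the stored source_id attribute
		$src_from_attr = $match["attrs"]["source_id"];
	}
}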
    

Example:
source = srcpart1
source = srcpart2
source = srcpart3

9.2.3. path

Index files path and file name (without extension). Mandatory.

Path specifies both directory and file name, but without extension. indexer will append different extensions to this path when generating final names for both permanent and temporary index files. Permanent data files have several different extensions starting with '.sp'; temporary files' extensions start with '.tmp'. It's safe to remove .tmp* files if indexer fails to remove them automatically.

For reference, different index files store the following data:

  • .spa stores document attributes (used in extern docinfo storage mode only);
  • .spd stores matching document ID lists for each word ID;
  • .sph stores index header information;
  • .spi stores word lists (word IDs and pointers to .spd file);
  • .spm stores MVA data;
  • .spp stores hit (aka posting, aka word occurrence) lists for each word ID.

Example:
path = /var/data/test1

9.2.4. docinfo

Document attribute values (docinfo) storage mode. Optional, default is 'extern'. Known values are 'none', 'extern' and 'inline'.

Docinfo storage mode defines how exactly docinfo will be physically stored on disk and in RAM. "none" means that there will be no docinfo at all (ie. no attributes). Normally you do not need to set "none" explicitly, because Sphinx will automatically select "none" when there are no attributes configured. "inline" means that the docinfo will be stored in the .spd file, along with the document ID lists. "extern" means that the docinfo will be stored separately (externally) from the document ID lists, in a special .spa file.

Basically, externally stored docinfo must be kept in RAM when querying, for performance reasons. So in some cases "inline" might be the only option. However, such cases are infrequent, and docinfo defaults to "extern". Refer to Section 3.2, “Attributes” for in-depth discussion and RAM usage estimates.

Example:
docinfo = inline

9.2.5. mlock

Memory locking for cached data. Optional, default is 0 (do not call mlock()).

For search performance, searchd preloads a copy of the .spa and .spi files into RAM, and keeps that copy in RAM at all times. But if there are no searches on the index for some time, there are no accesses to that cached copy, and the OS might decide to swap it out to disk. The first queries to such a "cooled down" index will cause swap-in, and their latency will suffer.

Setting mlock option to 1 makes Sphinx lock physical RAM used for that cached data using mlock(2) system call, and that prevents swapping (see man 2 mlock for details). mlock(2) is a privileged call, so it will require searchd to be either run from root account, or be granted enough privileges otherwise. If mlock() fails, a warning is emitted, but index continues working.

Example:
mlock = 1

9.2.6. morphology

A list of morphology preprocessors to apply. Optional, default is empty (do not apply any preprocessor).

Morphology preprocessors can be applied to the words being indexed to replace different forms of the same word with the base, normalized form. For instance, English stemmer will normalize both "dogs" and "dog" to "dog", making search results for both searches the same.

Built-in preprocessors include the English stemmer, the Russian stemmer (that supports UTF-8 and Windows-1251 encodings), Soundex, and Metaphone. The latter two replace the words with special phonetic codes that are equal if the words are phonetically close. Additional stemmers provided by the Snowball project's libstemmer library can be enabled at compile time using the --with-libstemmer configure option. The built-in English and Russian stemmers should be faster than their libstemmer counterparts, but can produce slightly different results, because they are based on an older version. The Metaphone implementation is based on the Double Metaphone algorithm and indexes the primary code.

Built-in values that can be used in the morphology option are: 'none', 'stem_en', 'stem_ru', 'stem_enru', 'soundex', and 'metaphone'. Additional values provided by libstemmer are in 'libstemmer_XXX' format, where XXX is the libstemmer algorithm codename (refer to libstemmer_c/libstemmer/modules.txt for a complete list).

Several stemmers can be specified (comma-separated). They will be applied to incoming words in the order they are listed, and the processing will stop once one of the stemmers actually modifies the word. Also when wordforms feature is enabled the word will be looked up in word forms dictionary first, and if there is a matching entry in the dictionary, stemmers will not be applied at all. Or in other words, wordforms can be used to implement stemming exceptions.

Example:
morphology = stem_en, libstemmer_sv

9.2.7. min_stemming_len

Minimum word length at which to enable stemming. Optional, default is 1 (stem everything). Introduced in version 0.9.9.

Stemmers are not perfect, and might sometimes produce undesired results. For instance, running "gps" keyword through Porter stemmer for English results in "gp", which is not really the intent. min_stemming_len feature lets you suppress stemming based on the source word length, ie. to avoid stemming too short words. Keywords that are shorter than the given threshold will not be stemmed. Note that keywords that are exactly as long as specified will be stemmed. So in order to avoid stemming 3-character keywords, you should specify 4 for the value. For more finely grained control, refer to wordforms feature.

Example:
min_stemming_len = 4

9.2.8. stopwords

Stopword files list (space separated). Optional, default is empty.

Stopwords are the words that will not be indexed. Typically you'd put most frequent words in the stopwords list because they do not add much value to search results but consume a lot of resources to process.

You can specify several file names, separated by spaces. All the files will be loaded. Stopwords file format is simple plain text. The encoding must match index encoding specified in charset_type. File data will be tokenized with respect to charset_table settings, so you can use the same separators as in the indexed data. The stemmers will also be applied when parsing stopwords file.

While stopwords are not indexed, they still do affect the keyword positions. For instance, assume that "the" is a stopword, that document 1 contains the line "in office", and that document 2 contains "in the office". Searching for "in office" as for exact phrase will only return the first document, as expected, even though "the" in the second one is stopped.

Example:
stopwords = /usr/local/sphinx/data/stopwords.txt
stopwords = stopwords-ru.txt stopwords-en.txt

9.2.9. wordforms

Word forms dictionary. Optional, default is empty.

Word forms are applied after tokenizing the incoming text by charset_table rules. They essentially let you replace one word with another. Normally, that would be used to bring different word forms to a single normal form (eg. to normalize all the variants such as "walks", "walked", "walking" to the normal form "walk"). It can also be used to implement stemming exceptions, because stemming is not applied to words found in the forms list.

Dictionaries are used to normalize incoming words both during indexing and searching. Therefore, to pick up changes in wordforms file it's required to reindex and restart searchd.

Word forms support in Sphinx is designed to support big dictionaries well. They moderately affect indexing speed: for instance, a dictionary with 1 million entries slows down indexing about 1.5 times. Searching speed is not affected at all. Additional RAM impact is roughly equal to the dictionary file size, and dictionaries are shared across indexes: ie. if the very same 50 MB wordforms file is specified for 10 different indexes, additional searchd RAM usage will be about 50 MB.

The dictionary file should be in a simple plain text format. Each line should contain the source and destination word forms, in exactly the same encoding as specified in charset_type, separated by a "greater than" sign. Rules from the charset_table will be applied when the file is loaded. So basically it's as case sensitive as your other full-text indexed data, ie. typically case insensitive. Here's a sample of the file contents:

walks > walk
walked > walk
walking > walk

The bundled spelldump utility helps you create a dictionary file in the format Sphinx can read, from source .dict and .aff dictionary files in ispell format.

Starting with version 0.9.9, you can map several source words to a single destination word. Because the work happens on tokens, not the source text, differences in whitespace and markup are ignored.

core 2 duo > c2d
e6600 > c2d
core 2duo > c2d

Example:
wordforms = /usr/local/sphinx/data/wordforms.txt

9.2.10. exceptions

Tokenizing exceptions file. Optional, default is empty.

Exceptions let you map one or more tokens (including tokens with characters that would normally be excluded) to a single keyword. They are similar to wordforms in that they also perform mapping, but have a number of important differences.

Short summary of the differences is as follows:

  • exceptions are case sensitive, wordforms are not;
  • exceptions can match sequences of tokens, wordforms work with single words only;
  • exceptions can use special characters that are not in charset_table, wordforms fully obey charset_table;
  • exceptions can underperform on huge dictionaries, wordforms handle millions of entries well.

The expected file format is also plain text, with one line per exception, and the line format is as follows:

map-from-tokens => map-to-token

Example file:

AT & T => AT&T
AT&T => AT&T
Standarten   Fuehrer => standartenfuhrer
Standarten Fuhrer => standartenfuhrer
MS Windows => ms windows
Microsoft Windows => ms windows
C++ => cplusplus
c++ => cplusplus
C plus plus => cplusplus

All tokens here are case sensitive: they will not be processed by charset_table rules. Thus, with the example exceptions file above, "At&t" text will be tokenized as two keywords "at" and "t", because of lowercase letters. On the other hand, "AT&T" will match exactly and produce single "AT&T" keyword.

Note that this map-to keyword is a) always interpreted as a single word, and b) is both case and space sensitive! In our sample, "ms windows" query will not match the document with "MS Windows" text. The query will be interpreted as a query for two keywords, "ms" and "windows". And what "MS Windows" gets mapped to is a single keyword "ms windows", with a space in the middle. On the other hand, "standartenfuhrer" will retrieve documents with "Standarten Fuhrer" or "Standarten Fuehrer" contents (capitalized exactly like this), or any capitalization variant of the keyword itself, eg. "staNdarTenfUhreR". (It won't catch "standarten fuhrer", however: this text does not match any of the listed exceptions because of case sensitivity, and gets indexed as two separate keywords.)

Whitespace in the map-from tokens list matters, but its amount does not. Any amount of whitespace in the map-from list will match any other amount of whitespace in the indexed document or query. For instance, the "AT & T" map-from token will match "AT    &  T" text, whatever the amount of space in both the map-from part and the indexed text. Such text will therefore be indexed as a special "AT&T" keyword, thanks to the very first entry from the sample.

Exceptions also let you capture special characters (that are exceptions from the general charset_table rules; hence the name). Assume that you generally do not want to treat '+' as a valid character, but still want to be able to search for some exceptions from this rule, such as 'C++'. The sample above will do just that, totally independent of what characters are in the table and what are not.

Exceptions are applied to raw incoming document and query data during indexing and searching respectively. Therefore, to pick up changes in the file it's required to reindex and restart searchd.

Example:
exceptions = /usr/local/sphinx/data/exceptions.txt

9.2.11. min_word_len

Minimum indexed word length. Optional, default is 1 (index everything).

Only those words that are not shorter than this minimum will be indexed. For instance, if min_word_len is 4, then 'the' won't be indexed, but 'they' will be.

Example:
min_word_len = 4

9.2.12. charset_type

Character set encoding type. Optional, default is 'sbcs'. Known values are 'sbcs' and 'utf-8'.

Different encodings have different methods for mapping their internal character codes into specific byte sequences. The two most common methods in use today are single-byte encoding and UTF-8. Their corresponding charset_type values are 'sbcs' (which stands for Single Byte Character Set) and 'utf-8'. The selected encoding type will be used everywhere the index is used: when indexing the data, when parsing a query against this index, when generating snippets, etc.

Note that while 'utf-8' implies that the decoded values must be treated as Unicode codepoint numbers, there's a family of 'sbcs' encodings that may in turn treat different byte values differently, and that should be properly reflected in your charset_table settings. For example, the same byte value of 224 (0xE0 hex) maps to different Russian letters depending on whether koi-8r or windows-1251 encoding is used.

Example:
charset_type = utf-8

9.2.13. charset_table

Accepted characters table, with case folding rules. Optional, default value depends on charset_type value.

charset_table is the main workhorse of Sphinx's tokenizing process, ie. the process of extracting keywords from document text or query text. It controls what characters are accepted as valid and what are not, and how the accepted characters should be transformed (eg. whether the case should be removed or not).

You can think of charset_table as a big table that has a mapping for each and every one of the 100K+ characters in Unicode (or as a small 256-character table if you're using SBCS). By default, every character maps to 0, which means that it does not occur within keywords and should be treated as a separator. Once mentioned in the table, a character is mapped to some other character (most frequently, either to itself or to a lowercase letter), and is treated as a valid keyword part.

The expected value format is a comma-separated list of mappings. The two simplest mappings declare a character as valid, and map a single character to another single character, respectively. But specifying the whole table in such a form would result in bloated and barely manageable specifications. So there are several syntax shortcuts that let you map ranges of characters at once. The complete list is as follows:

A->a
Single char mapping, declares source char 'A' as allowed to occur within keywords and maps it to destination char 'a' (but does not declare 'a' as allowed).
A..Z->a..z
Range mapping, declares all chars in source range as allowed and maps them to the destination range. Does not declare destination range as allowed. Also checks ranges' lengths (the lengths must be equal).
a
Stray char mapping, declares a character as allowed and maps it to itself. Equivalent to a->a single char mapping.
a..z
Stray range mapping, declares all characters in range as allowed and maps them to themselves. Equivalent to a..z->a..z range mapping.
A..Z/2
Checkerboard range map. Maps every pair of chars to the second char. More formally, declares odd characters in range as allowed and maps them to the even ones; also declares even characters as allowed and maps them to themselves. For instance, A..Z/2 is equivalent to A->B, B->B, C->D, D->D, ..., Y->Z, Z->Z. This mapping shortcut is helpful for a number of Unicode blocks where uppercase and lowercase letters go in such interleaved order instead of contiguous chunks.

Control characters with codes from 0 to 31 are always treated as separators. Characters with codes 32 to 127, ie. 7-bit ASCII characters, can be used in the mappings as is. To avoid configuration file encoding issues, 8-bit ASCII characters and Unicode characters must be specified in U+xxx form, where 'xxx' is hexadecimal codepoint number. This form can also be used for 7-bit ASCII characters to encode special ones: eg. use U+20 to encode space, U+2E to encode dot, U+2C to encode comma.

Example:
# 'sbcs' defaults for English and Russian
charset_table = 0..9, A..Z->a..z, _, a..z, \
	U+A8->U+B8, U+B8, U+C0..U+DF->U+E0..U+FF, U+E0..U+FF

# 'utf-8' defaults for English and Russian
charset_table = 0..9, A..Z->a..z, _, a..z, \
	U+410..U+42F->U+430..U+44F, U+430..U+44F

9.2.14. ignore_chars

Ignored characters list. Optional, default is empty.

Useful in the cases when some characters, such as soft hyphenation mark (U+00AD), should be not just treated as separators but rather fully ignored. For example, if '-' is simply not in the charset_table, "abc-def" text will be indexed as "abc" and "def" keywords. On the contrary, if '-' is added to ignore_chars list, the same text will be indexed as a single "abcdef" keyword.

The syntax is the same as for charset_table, but it's only allowed to declare characters, and not allowed to map them. Also, the ignored characters must not be present in charset_table.

Example:
ignore_chars = U+AD

9.2.15. min_prefix_len

Minimum word prefix length to index. Optional, default is 0 (do not index prefixes).

Prefix indexing lets you implement wildcard searching by 'wordstart*' wildcards (refer to the enable_star option for details on wildcard syntax). When the minimum prefix length is set to a positive number, indexer will index all the possible keyword prefixes (ie. word beginnings) in addition to the keywords themselves. Prefixes that are too short (below the minimum allowed length) will not be indexed.

For instance, indexing a keyword "example" with min_prefix_len=3 will result in indexing "exa", "exam", "examp", "exampl" prefixes along with the word itself. Searches against such index for "exam" will match documents that contain "example" word, even if they do not contain "exam" on itself. However, indexing prefixes will make the index grow significantly (because of many more indexed keywords), and will degrade both indexing and searching times.

There's no automatic way to rank perfect word matches higher in a prefix index, but there are a number of tricks to achieve that. First, you can set up two indexes, one with prefix indexing and one without it, search through both, and use the SetIndexWeights() call to combine weights (see the sketch after the query example below). Second, you can enable star-syntax and rewrite your extended-mode queries:

# in sphinx.conf
enable_star = 1

// in query
$cl->Query ( "( keyword | keyword* ) other keywords" );

Example:
min_prefix_len = 3

9.2.16. min_infix_len

Minimum infix length to index. Optional, default is 0 (do not index infixes).

Infix indexing lets you implement wildcard searching by 'start*', '*end', and '*middle*' wildcards (refer to the enable_star option for details on wildcard syntax). When the minimum infix length is set to a positive number, indexer will index all the possible keyword infixes (ie. substrings) in addition to the keywords themselves. Infixes that are too short (below the minimum allowed length) will not be indexed.

For instance, indexing a keyword "test" with min_infix_len=2 will result in indexing "te", "es", "st", "tes", "est" infixes along with the word itself. Searches against such index for "es" will match documents that contain "test" word, even if they do not contain "es" on itself. However, indexing infixes will make the index grow significantly (because of many more indexed keywords), and will degrade both indexing and searching times.

There's no automatic way to rank perfect word matches higher in an infix index, but the same tricks as with prefix indexes can be applied.

Example:
min_infix_len = 3

9.2.17. prefix_fields

The list of full-text fields to limit prefix indexing to. Optional, default is empty (index all fields in prefix mode).

Because prefix indexing impacts both indexing and searching performance, it might be desired to limit it to specific full-text fields only: for instance, to provide prefix searching through URLs, but not through page contents. prefix_fields specifies what fields will be prefix-indexed; all other fields will be indexed in normal mode. The value format is a comma-separated list of field names.

Example:
prefix_fields = url, domain

9.2.18. infix_fields

The list of full-text fields to limit infix indexing to. Optional, default is empty (index all fields in infix mode).

Similar to prefix_fields, but lets you limit infix-indexing to given fields.

Example:
infix_fields = url, domain

9.2.19. enable_star

Enables star-syntax (or wildcard syntax) when searching through prefix/infix indexes. Optional, default is 0 (do not use wildcard syntax), for compatibility with 0.9.7. Known values are 0 and 1.

This feature enables "star-syntax", or wildcard syntax, when searching through indexes which were created with prefix or infix indexing enabled. It only affects searching; so it can be changed without reindexing by simply restarting searchd.

The default value is 0, which means to disable star-syntax and treat all keywords as prefixes or infixes respectively, depending on the indexing-time min_prefix_len and min_infix_len settings. The value of 1 means that a star ('*') can be used at the start and/or the end of the keyword. The star will match zero or more characters.

For example, assume that the index was built with infixes and that enable_star is 1. Searching should work as follows:

  1. "abcdef" query will match only those documents that contain the exact "abcdef" word in them.
  2. "abc*" query will match those documents that contain any words starting with "abc" (including the documents which contain the exact "abc" word only);
  3. "*cde*" query will match those documents that contain any words which have "cde" characters in any part of the word (including the documents which contain the exact "cde" word only).
  4. "*def" query will match those documents that contain any words ending with "def" (including the documents that contain the exact "def" word only).

Example:
enable_star = 1

9.2.20. ngram_len

N-gram lengths for N-gram indexing. Optional, default is 0 (disable n-gram indexing). Known values are 0 and 1 (other lengths to be implemented).

N-grams provide basic CJK (Chinese, Japanese, Korean) support for unsegmented texts. The issue with CJK searching is that there may be no clear separators between the words. Ideally, the texts would be filtered through a special program called a segmenter that would insert separators in the proper locations. However, segmenters are slow and error-prone, and it's common to index contiguous groups of N characters, or n-grams, instead.

When this feature is enabled, streams of CJK characters are indexed as N-grams. For example, if the incoming text is "ABCDEF" (where A to F represent some CJK characters) and the length is 1, it will be indexed as if it were "A B C D E F". (With length equal to 2, it would produce "AB BC CD DE EF"; but only 1 is supported at the moment.) Only those characters that are listed in the ngram_chars table will be split this way; other ones will not be affected.

Note that if the search query is segmented, ie. there are separators between individual words, then wrapping the words in quotes and using extended mode will result in proper matches being found even if the text was not segmented. For instance, assume that the original query is BC DEF. After wrapping in quotes on the application side, it should look like "BC" "DEF" (with quotes). This query will be passed to Sphinx and internally split into 1-grams too, resulting in a "B C" "D E F" query, still with the quotes that act as the phrase matching operator. And it will match the text even though there were no separators in the text.
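
A minimal PHP sketch of that application-side quoting, assuming extended match mode, the standard sphinxapi.php client, and a placeholder index name:

require ( "sphinxapi.php" );

// $raw_query is assumed to hold the already segmented user query, eg. "BC DEF"
$words = preg_split ( '/\s+/', trim ( $raw_query ), -1, PREG_SPLIT_NO_EMPTY );
$query = '"' . implode ( '" "', $words ) . '"'; // becomes: "BC" "DEF"

$cl = new SphinxClient ();
$cl->SetMatchMode ( SPH_MATCH_EXTENDED2 );
$res = $cl->Query ( $query, "cjk_index" );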

Even if the search query is not segmented, Sphinx should still produce good results, thanks to phrase based ranking: it will pull closer phrase matches (which in case of N-gram CJK words can mean closer multi-character word matches) to the top.

Example:
ngram_len = 1

9.2.21. ngram_chars

N-gram characters list. Optional, default is empty.

To be used in conjunction with ngram_len, this list defines the characters, sequences of which are subject to N-gram extraction. Words comprised of other characters will not be affected by the N-gram indexing feature. The value format is identical to charset_table.

Example:
ngram_chars = U+3000..U+2FA1F

9.2.22. phrase_boundary

Phrase boundary characters list. Optional, default is empty.

This list controls what characters will be treated as phrase boundaries, in order to adjust word positions and enable phrase-level search emulation through proximity search. The syntax is similar to charset_table. Mappings are not allowed and the boundary characters must not overlap with anything else.

On phrase boundary, additional word position increment (specified by phrase_boundary_step) will be added to current word position. This enables phrase-level searching through proximity queries: words in different phrases will be guaranteed to be more than phrase_boundary_step distance away from each other; so proximity search within that distance will be equivalent to phrase-level search.

Phrase boundary condition will be raised if and only if such character is followed by a separator; this is to avoid abbreviations such as S.T.A.L.K.E.R or URLs being treated as several phrases.
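
As a hedged sketch in the same mixed style as the min_prefix_len example above (the boundary characters, step value, and keywords are illustrative), phrase-level matching via proximity could look like this:

# in sphinx.conf
phrase_boundary      = ., ?, !, U+2026
phrase_boundary_step = 100

// in query (extended mode): proximity below the step keeps all keywords within one phrase
$cl->Query ( '"brown fox jumps"~99' );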

Example:
phrase_boundary = ., ?, !, U+2026 # horizontal ellipsis

9.2.23. phrase_boundary_step

Phrase boundary word position increment. Optional, default is 0.

On phrase boundary, current word position will be additionally incremented by this number. See phrase_boundary for details.

Example:
phrase_boundary_step = 100

9.2.24. html_strip

Whether to strip HTML markup from incoming full-text data. Optional, default is 0. Known values are 0 (disable stripping) and 1 (enable stripping).

Stripping does not work with xmlpipe source type (it's suggested to upgrade to xmlpipe2 anyway). It should work with properly formed HTML and XHTML, but, just as most browsers, may produce unexpected results on malformed input (such as HTML with stray <'s or unclosed >'s).

Only the tags themselves, and also HTML comments, are stripped. To strip the contents of the tags too (eg. to strip embedded scripts), see html_remove_elements option. There are no restrictions on tag names; ie. everything that looks like a valid tag start, or end, or a comment will be stripped.

Example:
html_strip = 1

9.2.25. html_index_attrs

A list of markup attributes to index when stripping HTML. Optional, default is empty (do not index markup attributes).

Specifies HTML markup attributes whose contents should be retained and indexed even though other HTML markup is stripped. The format is per-tag enumeration of indexable attributes, as shown in the example below.

Example:
html_index_attrs = img=alt,title; a=title;

9.2.26. html_remove_elements

A list of HTML elements for which to strip contents along with the elements themselves. Optional, default is empty string (do not strip contents of any elements).

This feature lets you strip element contents, ie. everything that is between the opening and the closing tags. It is useful for removing embedded scripts, CSS, etc. The short tag form for empty elements (ie. <br />) is properly supported; ie. the text that follows such a tag will not be removed.

The value is a comma-separated list of element (tag) names whose contents should be removed. Tag names are case insensitive.

Example:
html_remove_elements = style, script

9.2.27. local

Local index declaration in the distributed index. Multi-value, optional, default is empty.

This setting is used to declare the local indexes that will be searched when the given distributed index is searched. All local indexes will be searched sequentially, utilizing only 1 CPU or core; to parallelize processing, you can configure searchd to query itself (refer to Section 9.2.28, “agent” for the details). There might be several local indexes declared per distributed index. Any local index can be mentioned several times in other distributed indexes.

Example:
local = chunk1
local = chunk2

9.2.28. agent

Remote agent declaration in the distributed index. Multi-value, optional, default is empty.

This setting is used to declare remote agents that will be searched when the given distributed index is searched. The agents can be thought of as network pointers that specify host, port, and index names. In the basic case, agents would correspond to remote physical machines. Strictly speaking, though, that is not always the case: you can point several agents to the same remote machine, or you can even point agents to the very same single instance of searchd (in order to utilize many CPUs or cores).

The value format is as follows:

agent = specification:remote-indexes-list
specification = hostname ":" port | path

Where 'hostname' is remote host name; 'port' is remote TCP port; 'path' is Unix-domain socket path and 'remote-indexes-list' is a comma-separated list of remote index names.

All agents will be searched in parallel. However, all indexes specified for a given agent will be searched sequentially in this agent. This lets you fine-tune the configuration to the hardware. For instance, if two remote indexes are stored on the same physical HDD, it's better to configure one agent with several sequentially searched indexes to avoid HDD stepping. If they are stored on different HDDs, having two agents will be advantageous, because the work will be fully parallelized. The same applies to CPUs; though the CPU performance impact caused by two processes stepping on each other is somewhat smaller and can frequently be ignored altogether.

On machines with many CPUs and/or HDDs, agents can be pointed to the same machine to utilize all of the hardware in parallel and reduce query latency. There is no need to setup several searchd instances for that; it's legal to configure the instance to contact itself. Here's an example setup, intended for a 4-CPU machine, that will use up to 4 CPUs in parallel to process each query:

index dist
{
	type = distributed
	local = chunk1
	agent = localhost:3312:chunk2
	agent = localhost:3312:chunk3
	agent = localhost:3312:chunk4
}

Note how one of the chunks is searched locally and the same instance of searchd queries itself to launch searches through three other ones in parallel.

Example:
agent = localhost:3312:chunk2 # contact itself
agent = /var/run/searchd.s:chunk2
agent = searchbox2:3312:chunk3,chunk4 # search remote indexes

9.2.29. agent_blackhole

Remote blackhole agent declaration in the distributed index. Multi-value, optional, default is empty. Introduced in version 0.9.9.

agent_blackhole lets you fire-and-forget queries to remote agents. That is useful for debugging (or just testing) production clusters: you can set up a separate debugging/testing searchd instance, and forward the requests to this instance from your production master (aggregator) instance without interfering with production work. Master searchd will attempt to connect and query the blackhole agent normally, but it will neither wait for nor process any responses. Also, all network errors on blackhole agents will be ignored. The value format is completely identical to the regular agent directive.

Example:
agent_blackhole = testbox:3312:testindex1,testindex2

9.2.30. agent_connect_timeout

Remote agent connection timeout, in milliseconds. Optional, default is 1000 (ie. 1 second).

When connecting to remote agents, searchd will wait at most this much time for the connect() call to complete successfully. If the timeout is reached but connect() does not complete, and retries are enabled, a retry will be initiated.

Example:
agent_connect_timeout = 300

9.2.31. agent_query_timeout

Remote agent query timeout, in milliseconds. Optional, default is 3000 (ie. 3 seconds).

After connection, searchd will wait at most this much time for remote queries to complete. This timeout is fully separate from the connection timeout; so the maximum possible delay caused by a remote agent equals the sum of agent_connect_timeout and agent_query_timeout. Queries will not be retried if this timeout is reached; a warning will be produced instead.

Example:
agent_query_timeout = 10000 # our query can be long, allow up to 10 sec

9.2.32. preopen

Whether to pre-open all index files, or open them per each query. Optional, default is 0 (do not preopen).

This option tells searchd that it should pre-open all index files on startup (or rotation) and keep them open while it runs. Currently, the default mode is not to pre-open the files (this may change in the future). Preopened indexes take a few (currently 2) file descriptors per index. However, they save on per-query open() calls; and they are also invulnerable to subtle race conditions that may happen during index rotation under high load. On the other hand, when serving many indexes (100s to 1000s), it still might be preferable to open them on a per-query basis in order to save file descriptors.

This directive does not affect indexer in any way, it only affects searchd.

Example:
preopen = 1

9.2.33. ondisk_dict

Whether to keep the dictionary file (.spi) for this index on disk, or precache it in RAM. Optional, default is 0 (precache in RAM). Introduced in version 0.9.9.

The dictionary (.spi) can be kept either in RAM or on disk. The default is to fully cache it in RAM. That improves performance, but might cause too much RAM pressure, especially if prefixes or infixes were used. Enabling ondisk_dict results in 1 additional disk I/O per keyword per query, but reduces the memory footprint.

This directive does not affect indexer in any way, it only affects searchd.

Example:
ondisk_dict = 1

9.2.34. inplace_enable

Whether to enable in-place index inversion. Optional, default is 0 (use separate temporary files). Introduced in version 0.9.9.

inplace_enable greatly reduces indexing disk footprint, at the cost of slightly slower indexing (it uses around 2x less disk space, but yields around 90-95% of the original performance).

Indexing involves two major phases. The first phase collects, processes, and partially sorts documents by keyword, and writes the intermediate result to temporary files (.tmp*). The second phase fully sorts the documents, and creates the final index files. Thus, rebuilding a production index on the fly involves around 3x peak disk footprint: the 1st copy for the intermediate temporary files, the 2nd copy for the newly constructed index, and the 3rd copy for the old index that will be serving production queries in the meantime. (Intermediate data is comparable in size to the final index.) That might be too much disk footprint for big data collections, and inplace_enable lets you reduce it. When enabled, it reuses the temporary files, outputs the final data back to them, and renames them on completion. However, this might require additional temporary data chunk relocation, which is where the performance impact comes from.

This directive does not affect searchd in any way, it only affects indexer.

Example:
inplace_enable = 1

9.2.35. inplace_hit_gap

In-place inversion fine-tuning option. Controls preallocated hitlist gap size. Optional, default is 0. Introduced in version 0.9.9.

This directive does not affect searchd in any way, it only affects indexer.

Example:
inplace_hit_gap = 1M

9.2.36. inplace_docinfo_gap

In-place inversion fine-tuning option. Controls preallocated docinfo gap size. Optional, default is 0. Introduced in version 0.9.9.

This directive does not affect searchd in any way, it only affects indexer.

Example:
inplace_docinfo_gap = 1M

9.2.37. inplace_reloc_factor

In-place inversion fine-tuning option. Controls relocation buffer size within indexing memory arena. Optional, default is 0.1. Introduced in version 0.9.9.

This directive does not affect searchd in any way, it only affects indexer.

Example:
inplace_reloc_factor = 0.1

9.2.38. inplace_write_factor

In-place inversion fine-tuning option. Controls in-place write buffer size within indexing memory arena. Optional, default is 0.1. Introduced in version 0.9.9.

This directive does not affect searchd in any way, it only affects indexer.

Example:
inplace_write_factor = 0.1

9.2.39. index_exact_words

Whether to index the original keywords along with the stemmed/remapped versions. Optional, default is 0 (do not index). Introduced in version 0.9.9.

When enabled, index_exact_words forces indexer to put the raw keywords in the index along with the stemmed versions. That, in turn, enables exact form operator in the query language to work. This impacts the index size and the indexing time. However, searching performance is not impacted at all.

Example:
index_exact_words = 1

9.3. indexer program configuration options

9.3.1. mem_limit

Indexing RAM usage limit. Optional, default is 32M.

Enforced memory usage limit that the indexer will not go above. Can be specified in bytes, or kilobytes (using K postfix), or megabytes (using M postfix); see the example. This limit will be automatically raised if set to extremely low value causing I/O buffers to be less than 8 KB; the exact lower bound for that depends on the indexed data size. If the buffers are less than 256 KB, a warning will be produced.

The maximum possible limit is 2047M. Values that are too low can hurt indexing speed, but 256M to 1024M should be enough for most if not all datasets. Setting this value too high can cause SQL server timeouts. During the document collection phase, there will be periods when the memory buffer is partially sorted and no communication with the database is performed, and the database server can time out. You can resolve that either by raising the timeouts on the SQL server side or by lowering mem_limit.

Example:
mem_limit = 256M
# mem_limit = 262144K # same, but in KB
# mem_limit = 268435456 # same, but in bytes

9.3.2. max_iops

Maximum I/O operations per second, for I/O throttling. Optional, default is 0 (unlimited).

I/O throttling related option. It limits maximum count of I/O operations (reads or writes) per any given second. A value of 0 means that no limit is imposed.

indexer can cause bursts of intensive disk I/O during indexing, and it might be desired to limit its disk activity (and keep something for other programs running on the same machine, such as searchd). I/O throttling helps to do that. It works by enforcing a minimum guaranteed delay between subsequent disk I/O operations performed by indexer. Modern SATA HDDs are able to perform up to 70-100+ I/O operations per second (that's mostly limited by disk head seek time). Limiting indexing I/O to a fraction of that can help reduce the search performance degradation caused by indexing.

Example:
max_iops = 40

9.3.3. max_iosize

Maximum allowed I/O operation size, in bytes, for I/O throttling. Optional, default is 0 (unlimited).

I/O throttling related option. It limits maximum file I/O operation (read or write) size for all operations performed by indexer. A value of 0 means that no limit is imposed. Reads or writes that are bigger than the limit will be split into several smaller operations, and counted as several operations by the max_iops setting. At the time of this writing, all I/O calls should be under 256 KB (the default internal buffer size) anyway, so max_iosize values higher than 256 KB should not affect anything.

Example:
max_iosize = 1048576

9.4. searchd program configuration options

9.4.1. listen

This setting lets you specify IP address and port, or Unix-domain socket path, that searchd will listen on. Introduced in version 0.9.9.

The informal grammar for listen setting is:

listen = address ":" port | port | path

I.e. you can specify either an IP address (or hostname) and port number, or just a port number, or Unix socket path. If you specify port number but not the address, searchd will listen on all network interfaces. Unix path is identified by a leading slash.

Examples:
listen = localhost
listen = localhost:5000
listen = 192.168.0.1 
listen = 192.168.0.1:5000
listen = /var/run/sphinx.s
listen = 3312

There can be multiple listen directives, searchd will listen for client connections on all specified ports and sockets. If no listen directive is found then the server will listen on all available interfaces using the default port (which is 3312).

Unix-domain sockets are not supported on Windows.

9.4.2. address

Interface IP address to bind on. Optional, default is 0.0.0.0 (ie. listen on all interfaces). DEPRECATED, use listen instead.

address setting lets you specify which network interface searchd will bind to, listen on, and accept incoming network connections on. The default value is 0.0.0.0 which means to listen on all interfaces. At this time, you cannot specify multiple interfaces.

Example:
address = 192.168.0.1

9.4.3. port

searchd TCP port number. DEPRECATED, use listen instead. Used to be mandatory. Default port number is 3312.

Example:
port = 3312

9.4.4. log

Log file name. Optional, default is 'searchd.log'. All searchd run time events will be logged in this file.

Example:
log = /var/log/searchd.log

9.4.5. query_log

Query log file name. Optional, default is empty (do not log queries). All search queries will be logged in this file. The format is described in Section 4.8, “searchd query log format”.

Example:
query_log = /var/log/query.log

9.4.6. read_timeout

Network client request read timeout, in seconds. Optional, default is 5 seconds. searchd will forcibly close the client connections which fail to send a query within this timeout.

Example:
read_timeout = 1

9.4.7. client_timeout

Maximum time to wait between requests (in seconds) when using persistent connections. Optional, default is five minutes.

Example:
client_timeout = 3600

9.4.8. max_children

Maximum amount of children to fork (or in other words, concurrent searches to run in parallel). Optional, default is 0 (unlimited).

Useful to control server load. There will be no more than this many concurrent searches running, at all times. When the limit is reached, additional incoming clients are dismissed with a temporary failure (SEARCHD_RETRY) status code and a message stating that the server is maxed out.

Example:
max_children = 10

9.4.9. pid_file

searchd process ID file name. Mandatory.

PID file will be re-created (and locked) on startup. It will contain head daemon process ID while the daemon is running, and it will be unlinked on daemon shutdown. It's mandatory because Sphinx uses it internally for a number of things: to check whether there already is a running instance of searchd; to stop searchd; to notify it that it should rotate the indexes. Can also be used for different external automation scripts.

Example:
pid_file = /var/run/searchd.pid

9.4.10. max_matches

Maximum amount of matches that the daemon keeps in RAM for each index and can return to the client. Optional, default is 1000.

Introduced in order to control and limit RAM usage, the max_matches setting defines how many matches will be kept in RAM while searching each index. Every match found will still be processed; but only the best N of them will be kept in memory and returned to the client in the end. Assume that the index contains 2,000,000 matches for the query. You rarely (if ever) need to retrieve all of them. Rather, you need to scan all of them, but only choose the "best" at most, say, 500 by some criteria (ie. sorted by relevance, or price, or anything else), and display those 500 matches to the end user in pages of 20 to 100 matches. And tracking only the best 500 matches is much more RAM and CPU efficient than keeping all 2,000,000 matches, sorting them, and then discarding everything but the first 20 needed to display the search results page. max_matches controls N in that "best N" amount.

This parameter noticeably affects per-query RAM and CPU usage. Values of 1,000 to 10,000 are generally fine, but higher limits must be used with care. Recklessly raising max_matches to 1,000,000 means that searchd will have to allocate and initialize 1-million-entry matches buffer for every query. That will obviously increase per-query RAM usage, and in some cases can also noticeably impact performance.

CAVEAT EMPTOR! Note that there also is another place where this limit is enforced. max_matches can be decreased on the fly through the corresponding API call, and the default value in the API is also set to 1,000. So in order to retrieve more than 1,000 matches to your application, you will have to change the configuration file, restart searchd, and set proper limit in SetLimits() call. Also note that you can not set the value in the API higher than the value in the .conf file. This is prohibited in order to have some protection against malicious and/or malformed requests.
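
For illustration, a minimal PHP sketch of raising the API-side limit to match a larger max_matches in sphinx.conf; the index name and values are placeholders:

require ( "sphinxapi.php" );

$cl = new SphinxClient ();
// offset, per-page limit, and the "best N" cap (must not exceed max_matches in sphinx.conf)
$cl->SetLimits ( 0, 20, 10000 );
$res = $cl->Query ( "some query", "myindex" );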

Example:
max_matches = 10000

9.4.11. seamless_rotate

Prevents searchd stalls while rotating indexes with huge amounts of data to precache. Optional, default is 1 (enable seamless rotation).

Indexes may contain some data that needs to be precached in RAM. At the moment, .spa, .spi and .spm files are fully precached (they contain attribute data, MVA data, and keyword index, respectively.) Without seamless rotate, rotating an index tries to use as little RAM as possible and works as follows:

  1. new queries are temporarily rejected (with "retry" error code);
  2. searchd waits for all currently running queries to finish;
  3. old index is deallocated and its files are renamed;
  4. new index files are renamed and required RAM is allocated;
  5. new index attribute and dictionary data is preloaded to RAM;
  6. searchd resumes serving queries from new index.

However, if there's a lot of attribute or dictionary data, then the preloading step could take noticeable time - up to several minutes in case of preloading 1-5+ GB files.

With seamless rotate enabled, rotation works as follows:

  1. new index RAM storage is allocated;
  2. new index attribute and dictionary data is asynchronously preloaded to RAM;
  3. on success, old index is deallocated and both indexes' files are renamed;
  4. on failure, new index is deallocated;
  5. at any given moment, queries are served either from old or new index copy.

Seamless rotate comes at the cost of higher peak memory usage during the rotation (because both old and new copies of .spa/.spi/.spm data need to be in RAM while preloading new copy). Average usage stays the same.

Example:
seamless_rotate = 1

9.4.12. preopen_indexes

Whether to forcibly preopen all indexes on startup. Optional, default is 0 (do not preopen). Enforces enabled preopen on all served indexes, to avoid manually specifying it in every index.

Example:
preopen_indexes = 1

9.4.13. unlink_old

Whether to unlink .old index copies on successful rotation. Optional, default is 1 (do unlink).

Example:
unlink_old = 0

9.4.14. attr_flush_period

When calling UpdateAttributes() to update document attributes in real-time, changes are first written to the in-memory copy of attributes (docinfo must be set to extern). Then, once searchd shuts down normally (via SIGTERM being sent), the changes are written to disk. Introduced in version 0.9.9.
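
For context, a minimal sketch of such an in-memory update through the PHP API; the index name, attribute, document ID, and value are placeholders:

require ( "sphinxapi.php" );

$cl = new SphinxClient ();
// set attribute 'group_id' to 456 for document 1 in index 'test1';
// the change first lands in the in-memory copy and is flushed as described above
$cl->UpdateAttributes ( "test1", array ( "group_id" ), array ( 1 => array ( 456 ) ) );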

Starting with 0.9.9, it is possible to tell searchd to periodically write these changes back to disk, to avoid them being lost. The time between those intervals is set with attr_flush_period, in seconds.

It defaults to 0, which disables the periodic flushing, but flushing will still occur at normal shut-down.

Example:
attr_flush_period = 900 # persist updates to disk every 15 minutes

9.4.15. ondisk_dict_default

Instance-wide defaults for the ondisk_dict directive. Optional, default is 0 (precache dictionaries in RAM). Introduced in version 0.9.9.

This directive lets you specify the default value of ondisk_dict for all the indexes served by this copy of searchd. The per-index directive takes precedence and will override this instance-wide default value, allowing for fine-grained control.

Example:
ondisk_dict_default = 1 # keep all dictionaries on disk

9.4.16. max_packet_size

Maximum allowed network packet size. Limits both query packets from clients, and response packets from remote agents in distributed environment. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 8M. Introduced in version 0.9.9.

Example:
max_packet_size = 32M

9.4.17. mva_updates_pool

Shared pool size for in-memory MVA updates storage. Optional, default size is 1M. Introduced in version 0.9.9.

This setting controls the size of the shared storage pool for updated MVA values. Specifying 0 for the size disables MVA updates entirely. Once the pool size limit is hit, MVA update attempts will result in an error. However, updates on regular (scalar) attributes will still work. Due to internal technical difficulties, it is currently not possible to store (flush) any updates on indexes where MVAs were updated, though this might be implemented in the future. In the meantime, MVA updates are intended to be used as a measure to quickly catch up with the latest changes in the database until the next index rebuild; not as a persistent storage mechanism.

Example:
mva_updates_pool = 16M

9.4.18. crash_log_path

Path (formally prefix) for the crash log files. Optional, default is empty (do not create crash logs). Introduced in version 0.9.9.

This is a debugging setting, to help catch rare offending queries causing crashes without otherwise affecting production instances. When enabled, searchd will intercept crash signals such as SIGSEGV, and dump offending query packets to files named "crash_log_path.PID", where PID is crashed process ID.

Example:
crash_log_path = /home/sphinx/log/crashlog

9.4.19. max_filters

Maximum allowed per-query filter count. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 256. Introduced in version 0.9.9.

Example:
max_filters = 1024

9.4.20. max_filter_values

Maximum allowed per-filter values count. Only used for internal sanity checks, does not directly affect RAM use or performance. Optional, default is 4096. Introduced in version 0.9.9.

Example:
max_filter_values = 16384

A. Sphinx revision history

A.1. Version 0.9.9-rc1, 17 nov 2008

  • added min_stemming_len directive
  • added IsConnectError() API call (helps distinguish API vs remote errors)
  • added duplicate log messages filter to searchd
  • added --nodetach debugging switch to searchd
  • added blackhole agents support for debugging/testing (agent_blackhole directive)
  • added max_filters, max_filter_values directives (were hardcoded before)
  • added int64 expression evaluation path, automatic inference, and BIGINT() enforcer function
  • added crash handler for debugging (crash_log_path directive)
  • added MS SQL (aka SQL Server) source support (Windows only, mssql_winauth and mssql_unicode directives)
  • added indexer-side column unpacking feature (unpack_zlib, unpack_mysqlcompress directives)
  • added nested brackets and NOTs support to query language, rewritten query parser
  • added persistent connections support (Open() and Close() API calls)
  • added index_exact_words feature, and exact form operator to query language ("hello =world")
  • added status variables support to SphinxSE (SHOW STATUS LIKE 'sphinx_%')
  • added max_packet_size directive (was hardcoded at 8M before)
  • added UNIX socket support, and multi-interface support (listen directive)
  • added star-syntax support to BuildExcerpts() API call
  • added inplace inversion of .spa and .spp (inplace_enable directive, 1.5-2x less disk space for indexing)
  • added builtin Czech stemmer (morphology=stem_cz)
  • added IDIV(), NOW(), INTERVAL(), IN() functions to expressions
  • added index-level early-reject based on filters
  • added MVA updates feature (mva_updates_pool directive)
  • added select-list feature with computed expressions support (see SetSelect() API call, test.php --select switch), protocol 1.22
  • added integer expressions support (2x faster than float)
  • added multiforms support (multiple source words in wordforms file)
  • added legacy rankers (MATCH_ALL/MATCH_ANY/etc), removed legacy matching code (everything runs on V2 engine now)
  • added field position limit modifier to field operator (syntax: @title[50] hello world)
  • added killlist support (sql_query_killlist directive, --merge-killlists switch)
  • added on-disk SPI support (ondisk_dict directive)
  • added indexer IO stats
  • added periodic .spa flush (attr_flush_period directive)
  • added config reload on SIGHUP
  • added per-query attribute overrides feature (see SetOverride() API call); protocol 1.21
  • added signed 64bit attrs support (sql_attr_bigint directive)
  • improved HTML stripper to also skip PIs (<? ... ?>, such as <?php ... ?>)
  • improved excerpts speed (up to 50x faster on big documents)
  • fixed a short window of searchd inaccessibility on startup (started listen()ing too early before)
  • fixed .spa loading on systems where read() is 2GB capped
  • fixed infixes vs morphology issues
  • fixed backslash escaping, added backslash to EscapeString()
  • fixed handling of over-2GB dictionary files (.spi)

A.2. Version 0.9.8.1, 30 oct 2008

  • added configure script to libsphinxclient
  • changed proximity/quorum operator syntax to require whitespace after length
  • fixed potential head process crash on SIGPIPE during "maxed out" message
  • fixed handling of incomplete remote replies (caused over-degraded distributed results, in rare cases)
  • fixed sending of big remote requests (caused distributed requests to fail, in rare cases)
  • fixed FD_SET() overflow (caused searchd to crash on startup, in rare cases)
  • fixed MVA vs distributed indexes (caused loss of 1st MVA value in result set)
  • fixed tokenizing of exceptions terminated by specials (eg. "GPS AT&T" in extended mode)
  • fixed buffer overrun in stemmer on overlong tokens occasionally emitted by proximity/quorum operator parser (caused crashes on certain proximity/quorum queries)
  • fixed wordcount ranker (could be dropping hits)
  • fixed --merge feature (numerous different fixes, caused broken indexes)
  • fixed --merge-dst-range performance
  • fixed prefix/infix generation for stopwords
  • fixed ignore_chars vs specials
  • fixed misplaced F_SETLKW check (caused certain build types, eg. RPM build on FC8, to fail)
  • fixed dictionary-defined charsets support in spelldump, added \x-style wordchars support
  • fixed Java API to properly send long strings (over 64K; eg. long document bodies for excerpts)
  • fixed Python API to accept offset/limit of 'long' type
  • fixed default ID range (that filtered out all 64-bit values) in Java and Python APIs

A.3. Version 0.9.8, 14 jul 2008

Indexing

  • added support for 64-bit document and keyword IDs, --enable-id64 switch to configure
  • added support for floating point attributes
  • added support for bitfields in attributes, sql_attr_bool directive and bit-widths part in sql_attr_uint directive
  • added support for multi-valued attributes (MVA)
  • added metaphone preprocessor
  • added libstemmer library support, provides stemmers for a number of additional languages
  • added xmlpipe2 source type, that supports arbitrary fields and attributes
  • added word form dictionaries, wordforms directive (and spelldump utility)
  • added tokenizing exceptions, exceptions directive
  • added an option to fully remove element contents to HTML stripper, html_remove_elements directive
  • added HTML entities decoder (with full XHTML1 set support) to HTML stripper
  • added per-index HTML stripping settings, html_strip, html_index_attrs, and html_remove_elements directives
  • added IO load throttling, max_iops and max_iosize directives
  • added SQL load throttling, sql_ranged_throttle directive
  • added an option to index prefixes/infixes for given fields only, prefix_fields and infix_fields directives
  • added an option to ignore certain characters (instead of just treating them as whitespace), ignore_chars directive
  • added an option to increment word position on phrase boundary characters, phrase_boundary and phrase_boundary_step directives
  • added --merge-dst-range switch (and filters) to index merging feature (--merge switch)
  • added mysql_connect_flags directive (eg. to reduce MySQL network traffic and/or indexing time)
  • improved ordinals sorting; now runs in fixed RAM
  • improved handling of documents with zero/NULL ids, now skipping them instead of aborting

Search daemon

  • added an option to unlink old index on successful rotation, unlink_old directive
  • added an option to keep index files open at all times (fixes subtle races on rotation), preopen and preopen_indexes directives
  • added an option to profile searchd disk I/O, --iostats command-line option
  • added an option to rotate index seamlessly (fully avoids query stalls), seamless_rotate directive
  • added HTML stripping support to excerpts (uses per-index settings)
  • added 'exact_phrase', 'single_passage', 'use_boundaries', 'weight_order' options to BuildExcerpts() API call
  • added distributed attribute updates propagation
  • added distributed retries on master node side
  • added log reopen on SIGUSR1
  • added --stop switch (sends SIGTERM to running instance)
  • added Windows service mode, and --servicename switch
  • added Windows --rotate support
  • improved log timestamping, now with millisecond precision

Querying

  • added extended engine V2 (faster, cleaner, better; SPH_MATCH_EXTENDED2 mode)
  • added ranking modes support (V2 engine only; SetRankingMode() API call)
  • added quorum searching support to query language (V2 engine only; example: "any three of all these words"/3)
  • added query escaping support to query language, and EscapeString() API call
  • added multi-field syntax support to query language (example: "@(field1,field2) something"), and @@relaxed field checks option
  • added optional star-syntax ('word*') support in keywords, enable_star directive (for prefix/infix indexes only)
  • added full-scan support (query must be fully empty; can perform block-reject optimization)
  • added COUNT(DISTINCT(attr)) calculation support, SetGroupDistinct() API call
  • added group-by on MVA support, SetArrayResult() PHP API call
  • added per-index weights feature, SetIndexWeights() API call
  • added geodistance support, SetGeoAnchor() API call
  • added result set sorting by arbitrary expressions in run time (eg. "@weight+log(price)*2.5"), SPH_SORT_EXPR mode
  • added result set sorting by @custom compile-time sorting function (see src/sphinxcustomsort.inl)
  • added result set sorting by @random value
  • added result set merging for indexes with different schemas
  • added query comments support (3rd arg to Query()/AddQuery() API calls, copied verbatim to query log)
  • added keyword extraction support, BuildKeywords() API call
  • added binding field weights by name, SetFieldWeights() API call
  • added optional limit on query time, SetMaxQueryTime() API call
  • added optional limit on found matches count (4th arg to SetLimits() API call, so-called 'cutoff')
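
Several of the querying additions above map directly onto the PHP client API. A rough sketch, assuming the bundled sphinxapi.php and an index called "products" with a "price" attribute (the index/attribute names, weights, and limits are made up for illustration):

<?php
// Rough sketch of the 0.9.8 querying features listed above, via the PHP client.
// Index/attribute names, weights and limits are illustrative only.
require_once "sphinxapi.php";

$cl = new SphinxClient();
$cl->SetServer("localhost", 9312);                            // your searchd host/port
$cl->SetMatchMode(SPH_MATCH_EXTENDED2);                       // extended engine V2
$cl->SetRankingMode(SPH_RANK_PROXIMITY_BM25);                 // ranking modes (V2 only)
$cl->SetFieldWeights(array("title" => 10, "body" => 1));      // field weights bound by name
$cl->SetSortMode(SPH_SORT_EXPR, "@weight + log(price)*2.5");  // sort by run-time expression
$cl->SetLimits(0, 20, 1000, 5000);                            // offset, limit, max_matches, cutoff
$cl->SetMaxQueryTime(3000);                                   // per-query time limit, msec

// multi-field syntax in the query; the 3rd argument is a comment copied verbatim to the query log
$res = $cl->Query("@(title,body) hello world", "products", "example query");
?>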

APIs and SphinxSE

  • added pure C API (libsphinxclient)
  • added Ruby API (thanks to Dmytro Shteflyuk)
  • added Java API
  • added SphinxSE support for MVAs (use varchar), floats (use float), 64bit docids (use bigint)
  • added SphinxSE options "floatrange", "geoanchor", "fieldweights", "indexweights", "maxquerytime", "comment", "host" and "port"; and support for "expr:CLAUSE"
  • improved SphinxSE max query size (using MySQL condition pushdown), up to 256K now

General

  • added scripting (shebang syntax) support to config files (example: #!/usr/bin/php in the first line)
  • added unified config handling and validation to all programs
  • added unified documentation
  • added .spec file for RPM builds
  • added automated testing suite
  • improved index locking, now fcntl()-based instead of buggy file-existence-based
  • fixed unaligned RAM accesses, now works on SPARC and ARM

Changes and fixes since 0.9.8-RC2

  • added pure C API (libsphinxclient)
  • added Ruby API
  • added SetConnectTimeout() PHP API call
  • added allowed type check to UpdateAttributes() handler (issue #174)
  • added defensive MVA checks on index preload (protection against broken indexes, issue #168)
  • added sphinx-min.conf sample file
  • added --without-iconv switch to configure
  • removed redundant -lz dependency in searchd
  • removed erroneous "xmlpipe2 deprecated" warning
  • fixed EINTR handling in piped read (issue #166)
  • fixup query time before logging and sending to client (issue #153)
  • fixed attribute updates vs full-scan early-reject index (issue #149)
  • fixed gcc warnings (issue #160)
  • fixed mysql connection attempt vs pgsql source type (issue #165)
  • fixed 32-bit wraparound when preloading over 2 GB files
  • fixed "out of memory" message vs over 2 GB allocs (issue #116)
  • fixed unaligned RAM access detection on ARM (where unaligned reads do not crash but produce wrong results)
  • fixed missing full scan results in some cases
  • fixed several bugs in --merge, --merge-dst-range
  • fixed @geodist vs MultiQuery and filters, @expr vs MultiQuery
  • fixed GetTokenEnd() vs 1-grams (was causing crash in excerpts)
  • fixed sql_query_range to handle empty strings in addition to NULL strings (Postgres specific)
  • fixed morphology=none vs infixes
  • fixed case sensitive attributes names in UpdateAttributes()
  • fixed ext2 ranking vs. stopwords (now using atompos from query parser)
  • fixed EscapeString() call
  • fixed escaped specials (now handled as whitespace if not in charset)
  • fixed schema minimizer (now handles type/size mismatches)
  • fixed word stats in extended2; stemmed form is now returned
  • fixed spelldump case folding vs dictionary-defined character sets
  • fixed Postgres BOOLEAN handling
  • fixed enforced "inline" docinfo on empty indexes (normally ok, but index merge was really confused)
  • fixed rare count(distinct) out-of-bounds issue (it occasionally caused too high @distinct values)
  • fixed hangups on documents with id=DOCID_MAX in some cases
  • fixed rare crash in tokenizer (prefixed synonym vs. input stream eof)
  • fixed query parser vs "aaa (bbb ccc)|ddd" queries
  • fixed BuildExcerpts() request in Java API
  • fixed Postgres specific memory leak
  • fixed handling of overshort keywords (less than min_word_len)
  • fixed HTML stripper (now emits space after indexed attributes)
  • fixed 32-field case in query parser
  • fixed rare count(distinct) vs. querying multiple local indexes vs. reusable sorter issue
  • fixed sorting of negative floats in SPH_SORT_EXTENDED mode

A.4. Version 0.9.7, 02 apr 2007

  • added support for sql_str2ordinal_column
  • added support for up to 5 sort-by attrs (in extended sorting mode)
  • added support for separate groups sorting clause (in group-by mode)
  • added support for on-the-fly attribute updates (PRE-ALPHA; will change heavily; use for preliminary testing ONLY)
  • added support for zero/NULL attributes
  • added support for 0.9.7 features to SphinxSE
  • added support for n-grams (alpha, 1-grams only for now)
  • added support for warnings reported to client
  • added support for exclude-filters
  • added support for prefix and infix indexing (see max_prefix_len, max_infix_len)
  • added @* syntax to reset current field to query language
  • added removal of duplicate entries in query index order
  • added PHP API workarounds for PHP signed/unsigned braindamage
  • added locks to avoid two concurrent indexers working on same index
  • added check for existing attributes vs. docinfo=none case
  • improved groupby code a lot (better precision, and up to 25x faster in extreme cases)
  • improved error handling and reporting
  • improved handling of broken indexes (reports error instead of hanging/crashing)
  • improved mmap() limits for attributes and wordlists (now able to map over 4 GB on x64 and over 2 GB on x32 where possible)
  • improved malloc() pressure in head daemon (search time should not degrade with time any more)
  • improved test.php command line options
  • improved error reporting (distributed query, broken index etc issues now reported to client)
  • changed default network packet size to be 8M, added extra checks
  • fixed division by zero in BM25 on 1-document collections (in extended matching mode)
  • fixed .spl files getting unlinked
  • fixed crash in schema compatibility test
  • fixed UTF-8 Russian stemmer
  • fixed requested matches count when querying distributed agents
  • fixed signed vs. unsigned issues everywhere (ranged queries, CLI search output, and obtaining docid)
  • fixed potential crashes vs. negative query offsets
  • fixed 0-match docs vs. extended mode vs. stats
  • fixed group/timestamp filters being ignored if querying from older clients
  • fixed docs to mention pgsql source type
  • fixed issues with explicit '&' in extended matching mode
  • fixed wrong assertion in SBCS encoder
  • fixed crashes with no-attribute indexes after rotate

A.5. Version 0.9.7-RC2, 15 dec 2006

  • added support for extended matching mode (query language)
  • added support for extended sorting mode (sorting clauses)
  • added support for SBCS excerpts
  • added mmap()ing for attributes and wordlist (improves search time, speeds up fork() greatly)
  • fixed attribute name handling to be case insensitive
  • fixed default compiler options to simplify post-mortem debugging (added -g, removed -fomit-frame-pointer)
  • fixed rare memory leak
  • fixed "hello hello" queries in "match phrase" mode
  • fixed issue with excerpts, texts and overlong queries
  • fixed logging multiple index name (no longer tokenized)
  • fixed trailing stopword not flushed from tokenizer
  • fixed boolean evaluation
  • fixed pidfile being wrongly unlink()ed on bind() failure
  • fixed --with-mysql-includes/libs (they conflicted with well-known paths)
  • fixes for 64-bit platforms

A.6. Version 0.9.7-RC1, 26 oct 2006

  • added alpha index merging code
  • added an option to decrease max_matches per-query
  • added an option to specify IP address for searchd to listen on
  • added support for unlimited amount of configured sources and indexes
  • added support for group-by queries
  • added support for /2 range modifier in charset_table
  • added support for arbitrary amount of document attributes
  • added logging filter count and index name
  • added --with-debug option to configure to compile in debug mode
  • added -DNDEBUG when compiling in default mode
  • improved search time (added doclist size hints, in-memory wordlist cache, and used VLB coding everywhere)
  • improved (refactored) SQL driver code (adding new drivers should be very easy now)
  • improved excerpts generation
  • fixed issue with empty sources and ranged queries
  • fixed querying purely remote distributed indexes
  • fixed suffix length check in English stemmer in some cases
  • fixed UTF-8 decoder for codes over U+20000 (for CJK)
  • fixed UTF-8 encoder for 3-byte sequences (for CJK)
  • fixed overshort (less than min_word_len) words prepended to next field
  • fixed source connection order (indexer does not connect to all sources at once now)
  • fixed line numbering in config parser
  • fixed some issues with index rotation

A.7. Version 0.9.6, 24 jul 2006

  • added support for empty indexes
  • added support for multiple sql_query_pre/post/post_index
  • fixed timestamp ranges filter in "match any" mode
  • fixed configure issues with --without-mysql and --with-pgsql options
  • fixed building on Solaris 9

A.8. Version 0.9.6-RC1, 26 jun 2006

  • added boolean queries support (experimental, beta version)
  • added simple file-based query cache (experimental, beta version)
  • added storage engine for MySQL 5.0 and 5.1 (experimental, beta version)
  • added GNU style configure script
  • added new searchd protocol (all binary, and should be backwards compatible)
  • added distributed searching support to searchd
  • added PostgreSQL driver
  • added excerpts generation
  • added min_word_len option to index
  • added max_matches option to searchd, removed hardcoded MAX_MATCHES limit
  • added initial documentation, and a working example.sql
  • added support for multiple sources per index
  • added soundex support
  • added group ID ranges support
  • added --stdin command-line option to search utility
  • added --noprogress option to indexer
  • added --index option to search
  • fixed UTF-8 decoder (3-byte codepoints did not work)
  • fixed PHP API to handle big result sets faster
  • fixed config parser to handle empty values properly
  • fixed redundant time(NULL) calls in time-segments mode

Posted by 프로그래머

2008/12/15 09:39 2008/12/15 09:39

일반적으로 사진 보정, 이미지 저작툴 하면 포토샵을 떠올릴 만큼 포토샵이 널리 사용되고 있지만 기능이 포토샵 수준이면서 소스 코드까지 공개된 무료 이미지 저작툴인 김프(Gimp)를 사용해봐도 좋을 것 같습니다. 대부분의 사람들이 기존 포토샵 UI 에 익숙해져 있을텐데 그런 분들을 위해 게시물 하단에서 스크린샷으로 소개하고 있는 김프샵을 추가로 설치해 볼 수 있습니다. 그리고 김프 사용법은 김프 홈페이지에 있는 문서를 참고할 수 있습니다. 참고로 국내 김프 사용자들과 커뮤니티 하시려면 김프 코리아에 방문해도 좋을 것 같습니다.

[GIMP] - 사진보정/이미지 저작툴
김프 홈페이지 : http://www.gimp.org/
김프 도큐먼트 : http://docs.gimp.org/ko/
김프 다운로드 : http://downloads.sourceforge.net/gimp-win/gimp-2.6.2-i686-setup.exe




[GIMPShop] - GIMP 를 포토샵 처럼 꾸며주는 툴
김프샵 홈페이지 : http://thegimpshop.net/
김프샵 다운로드 : http://www.computerdefense.org/gimpshop/gimpshop_2.2.8_fix1_setup.exe






웹프로그래머의 홈페이지 정보 블로그 http://hompy.info/

Posted by 프로그래머

2008/11/30 13:10 2008/11/30 13:10

웹사이트를 통째로 다운로드 할 수 있는 툴이 가끔 필요할 때가 있습니다. Website Copier 또는 Offline Browser 로 불리우는 3가지 무료 유틸리티를 소개합니다. 소스 코드까지 공개 되어 있는 오픈 소스 소프트웨어 HTTrack 과 Free Download Manager 도 있고 빌드된 바이너리만 있는 소프트웨어 BackStreet Browser 도 있습니다. 검색엔진에서 키워드를 "Offline Browser" 로 검색하면 다양한 유료, 무료 소프트웨어들을 찾을 수 있지만 지금 소개하는 3가지 유틸리티만 있어도 충분할 것 같습니다. 사용법은 그리 어렵지 않으니 직접 설치해서 사용할 수 있을 것입니다. 소스 코드를 수정할 수 있는 개발자라면 프로그램 소스 파일을 다운로드 받아 구미에 맞게 기능 개선을 해볼 수 있겠습니다. Offline Browser 로 느려서 답답했던 웹사이트나 해외 사이트를 다운로드 받아서 내 컴퓨터에서 브라우징 해보세요. 학습하거나 참조하기 위해 문서 형식의 웹사이트를 다운로드 받아서 휴대용 PC와 같은 인터넷 연결 없는 시스템에서 브라우징 하기에 유용합니다. 그리고 간단히 나의 블로그를 백업하는 용도로 사용해도 되겠군요.

[HTTrack]
홈페이지 : http://www.httrack.com/
다운로드 : http://www.httrack.com/httrack-3.43.exe

[Free Download Manager]
홈페이지 : http://www.freedownloadmanager.org/
다운로드 : http://files2.freedownloadmanager.org/fdminst3.exe

[BackStreet Browser]
홈페이지 : http://www.spadixbd.com/backstreet/
다운로드 : http://www.convertjunction.com/download/bs.exe

웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

Posted by 프로그래머

2008/11/26 08:46 2008/11/26 08:46

모니터에 위치한 특정 픽셀의 색깔을 추출해서 컬러 값을 알아낼 수 있는 유틸리티 ColorCop 과 ColorPic 입니다. 웹디자인 하시는 분이나 디자인 툴을 다루는 분들에게 유용한 유틸리티이며 그렇지 않은 분들에게도 있으면 도움이 될 것 같습니다.

[Color Cop] http://www.colorcop.net/
중앙 왼쪽에 보이는 스포이트(spuit)를 클릭하고 드래그 해서 색깔을 알아내고 싶은 위치로 마우스 포인트를 옮기고 마우스 버튼을 떼면 색이 추출됩니다.


[ColorPic] http://iconico.com/colorpic/
Chips 에 있는 16개의 박스 또는 칩(Chip) 중에 하나를 마우스로 선택하고 클릭한 후 색깔을 알아내고 싶은 위치로 마우스 포인트를 옮기고 [Ctrl]+[G] 키를 누르면 색이 추출됩니다. 16개의 칩(Chip)중에 하나를 키보드로 선택하려면 [F1], [F2], [F3], [F4], ... 와 같은 펑션키를 누르면 됩니다.



웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

Posted by 프로그래머

2008/11/25 08:52 2008/11/25 08:52
Response
No Trackback , 5 Comments
RSS :
http://hompy.info/rss/response/534

네이버 개발자 센터가 어제 11월 22일에 오픈했나 봅니다. 오픈 소스와 오픈 API 로 분류되어 있고, 오픈 소스 쪽에서는 아래와 같은 프로젝트들이 진행되고 있습니다. 오픈 소스 분류를 보면 제로보드로 많이 알려졌고 이번에 개명된 Xpress Engine 과 개발자들에게 어느 정도 알려진 국산 오픈소스 데이터베이스인 큐브리드를 포함해, 협업 개발 플랫폼 nFORGE, 서버 장비 모니터링 툴 Sysmon, 복수 서버 관리 쉘 Dist, 대형 분산 데이터 서버 관리 시스템 neptune, 분산 메모리 기반의 컴퓨팅 플랫폼 Coord, 네이버 카페와 블로그에 사용되고 있는 자바스크립트 WYSIWYG 에디터인 스마트 에디터로 구성되어 있습니다. 앞으로 더 많은 오픈 소스 프로젝트가 추가된다고 하니 반가운 일입니다. 이제 시작이지만 빠르게 활성화되어 즐겁고 유익한 개발자들의 놀이터가 되길 바라며, 국내뿐만 아니라 해외 개발자들도 함께할 수 있는 개발자 센터가 되길 바랍니다.

[큐브리드]
큐브리드는 엔터프라이즈급 오픈 소스 DBMS로서, 인터넷 서비스에 최적화된 DBMS를 지향하고 있습니다. 국내외 6,000 카피 이상의 현장 적용과 지난 2년간 3만건 이상의 제품 다운로드를 통해 미션 크리티컬 응용에서 요구하는 성능, 안정성, 확장성, 가용성을 보장하고 있으며, 제품의 간편한 설치 및 GUI 기반의 클라이언트 툴을 자체 제공함으로써 개발자 접근성 및 관리 편의성을 증대하고 있습니다.

CUBRID 2008은 참여, 개방, 공유의 가치를 기반으로 국내 개발자들과 함께 만들어가는 DBMS가 될 것이며, 특정 벤더에 종속적인 소프트웨어 생태계를 국내 ISV들과 협업하여 재편하고 궁극적으로 국산 DBMS가 많이 사용되는 세상을 만들어 가겠습니다.

[nFORGE]
nFORGE는 소프트웨어 개발에 필요한 기능들을 사용하기 편리하게 웹으로 묶은 협업 개발 플랫폼입니다. 버그나 문제점을 올리고 관리할 수 있는 이슈 트래커, 각종 문서와 정보를 간편하게 공유할 수 있는 위키, 소스코드의 변경내역을 편리하게 관리할 수 있는 형상관리 툴, 일반적인 용도의 게시판, 그리고 최종 작업 결과물을 공유하기 위한 파일 릴리즈 기능 등을 포함하고 있습니다.

nFORGE를 통해서 소프트웨어 개발자들이 개발 효율도 높이고 즐겁게 개발 작업이 이루어질 수 있도록 하고자 합니다. 이곳 네이버 개발자 사이트의 각종 프로젝트들도 모두 nFORGE 위에서 운영되고 있으니 부디 많이 가져다 쓰시고, 사용 도중 발생한 문제점이나 제안사항 등은 언제든지 nFORGE 프로젝트 사이트에 올려 주셔서 부디 nFORGE가 더 좋은 소프트웨어가 되어 함께 나눌 수 있도록 도와 주시면 감사하겠습니다.

[Xpress Engine]
컨텐츠의 생산과 유통을 극대화 하여 웹 생태계의 선순환에 기여하기 위해 Xpress Engine (XE) 오픈 소스 프로젝트는 시작되었습니다. Developer, Navigator, Explorer 라는 3개의 프로젝트 그룹의 멤버들 그리고 사용하고 참여해주시는 많은 분들의 노력과 정성으로 Xpress Engine 는 Content Management System으로 발전하고 있습니다. 그리고 차별없는 웹 세상을 만들기 위해 웹 표준화 / 웹 접근성을 준수하고자 노력하고 있습니다.

보다 쉽고 편하게 글을 작성하고 작성된 글을 잘 보이도록 그리고 잘 사용되도록 함으로서 웹이라는 가상의 공간이 더욱 풍족해지고 쓸만한 곳이 될 수 있도록 많은 분들의 관심과 참여를 원합니다.

[Sysmon]
Sysmon은 대규모 리눅스/윈도우 서버 장비를 모니터링하기 위해 개발된 MySQL 기반의 웹 도구입니다. 5천 대 이상의 서버에 대해 한 대의 마스터 서버로 모니터링이 가능하며, 사용자가 간단한 조회를 통해 쉽게 장비의 종합적인 상태를 파악할 수 있습니다. 또한 다양한 요구사항을 만족시키기 위하여 기본적인 수집 항목 이외에 모니터링 대상 항목과 모니터링 뷰 화면을 사용자가 유연하게 추가, 변경할 수 있도록 설계되었습니다.

Sysmon은 현재 NHN의 서버 운영에 크게 기여하고 있으며 쉬운 설치와 강력한 기능을 바탕으로 점점 사용자 저변을 넓혀가고 있습니다. 보다 많은 사용자가 Sysmon을 접하고 보다 많은 개발자가 오픈소스 개발에 참여함으로써 Sysmon은 한층 더 좋은 소프트웨어로 발전해갈 수 있을 것입니다. 많은 관심과 참여를 부탁 드립니다.

[Dist]
Dist는 다수의 서버를 효율적으로 관리하기 위한 셸 명령어 수행 도구입니다. Dist를 사용하여 마스터 서버로 내려진 명령어는, 여러 서버에 동시 또는 순차적으로 실행되고 그 결과는 정리되어 마스터 서버에서 보여집니다. 각 서버의 상황에 맞는 명령어 커스터마이징, 출력 형태의 변경, 자유로운 명령어 수행, 서버 목록 구성, 향상된 에러 처리 능력 등의 강력한 기능을 가지고 있습니다. Python으로 만들어진 그리 크지 않은 간단한 프로그램이지만, 다수의 시스템을 효과적으로 신속하고 편리하게 운영하는데 큰 도움을 줄 것입니다.

Dist는 NHN 주요 서비스를 담당하는 다수의 서버(unix, linux, 윈도우 서버)를 효과적으로 운영하는데 탁월한 효용성을 보여주고 있습니다. 또한 지속적으로 기능을 추가하여 점점 더 사용하기 편리한 도구가 될 것입니다.

[neptune]
neptune은 수십 ~ 수백대의 분산된 서버에 수십 TB 이상 대규모의 구조화된 데이터를 저장, 서비스하는 데이터 관리 시스템입니다. neptune을 이용하면 실시간 데이터 서비스뿐만 아니라 Hadoop MapReduce와 같은 분산컴퓨팅 플랫폼과 유기적으로 동작하여 쉽고 빠르게 저장된 데이터를 분석할 수 있습니다. 심플한 데이터 모델, 수천대 규모의 확장성, 데이터의 신뢰성, 백업이 필요 없는 스토리지, 자동 복구 기능 등을 neptune에서 경험할 수 있습니다.

인터넷을 통해 수많은 정보가 쉽고 빠르게 생산되고 있지만 생산된 정보를 쉽고 안전하게 담을 수 있는 플랫폼은 많지 않습니다. neptune 프로젝트는 인터넷, 클라우드 컴퓨팅 시대에서 생산되는 무한대의 데이터를 저장하고 서비스하는 플랫폼을 만들기 위해 노력하겠습니다.

[Coord]
Coord는 분산 메모리 기반의 컴퓨팅 플랫폼입니다. 다수의 서버로부터 수집된 물리적 메모리 공간들은 Coord의 거대한 가상 메모리 공간으로 매핑됩니다. 이렇게 구성된 거대한 메모리 공간은 분산 환경을 위한 프로세스간 통신, 자원 공유, 동기화를 위하여 사용되고, 사용자는 단순한 API(read/write/take)만으로 다양한 분산 프로그래밍 모델(client-server, master-worker, scatter-gather, map-reduce)을 별도의 네트워크 관련 지식없이 쉽게 구현할 수 있습니다. 따라서 사용자는 일반적인 목적을 위한 분산 프로그래밍 뿐만 아니라 대규모 계산이나 대용량 데이터 처리를 필요로 하는 분산 프로그래밍에서도 Coord를 유용하게 사용할 수 있습니다.

현재 Coord는 C++을 기반으로 개발되었지만 다양한 프로그래밍 언어(Java, Python, PHP등)들을 함께 지원하고 있습니다. 이것은 기 개발된 데이터/텍스트 마이닝 알고리즘들과 기계학습 알고리즘들이 Coord와 함께 유연하게 연동될 수 있다는 가능성을 보여줍니다. 대부분의 알고리즘들이 단일 서버 환경에서 한정적인 메모리 기반으로 설계되었기 때문에 분산 환경에 적용하는 것이 쉽지만은 않습니다. 그러나 Coord를 사용하면 최소한의 코드 수정만으로 기 개발된 알고리즘들을 그대로 분산 환경에 적용할 수 있습니다. 이를 위하여 대규모 계산 문제나 대용량 데이터 처리에 관심있는 개발자들의 적극적인 참여가 필요합니다. Coord 기반의 분산 컴퓨팅의 세계로 여러분들을 초대합니다.

[스마트 에디터]
스마트 에디터는 Javascript로 구현된 웹 기반의 WYSIWYG 에디터입니다. 스마트 에디터는 WYSIWYG 모드 및 HTML 편집 모드 제공, 자유로운 폰트 크기 설정 기능, 줄 간격 설정 기능, 단어 찾기/바꾸기 기능 등 편집에 필요한 다양한 기능을 제공하므로 사용자들은 스마트 에디터를 사용하여 쉽고 편리하게 원하는 글을 작성할 수 있습니다. 그리고 에디터가 기능을 쉽게 추가할 수 있는 구조로 되어 있어, 정해진 규칙에 따라 플러그인을 만들어 붙이기만 하면 스마트 에디터에 기능을 추가할 수 있습니다.

현재 스마트 에디터는 네이버, 한게임 등 NHN의 주요 서비스에 적용되어 있습니다. 그리고 Internet Explorer 6 이상, FireFox 2 이상, Safari 3, Opera 9, Chrome등 다양한 브라우저를 지원하고 있으며 지속적으로 지원 대상 브라우저를 넓혀 갈 예정입니다. 또한 지속적인 기능 추가를 통해 편리하고 강력한 에디터로 거듭날 것입니다.

  • 공개 준비중
웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

Posted by 프로그래머

2008/11/23 11:33 2008/11/23 11:33

무료라지만 유료로 써도 아깝지 않을 만큼 잘 만든 프리웨어 소프트웨어들이 생각보다 많고 이런 소프트웨어들만 가지고 내 PC를 토핑해도 불편하지 않을 정도입니다. 물론 새로운 소프트웨어의 사용법을 익히고 내 것으로 만드는 데 시간이 필요하긴 합니다. 무료 소프트웨어만으로 내 PC를 꾸미고 불편함 없이 PC를 사용하는 것이 가능할까요? 도전해 볼 만한 일입니다. 프리웨어 소프트웨어 중에는 소스 코드 또한 오픈 되어 있어 소프트웨어 개발에 관련된 분들에게 유익한 학습 자료가 될 수 있습니다.
아래 나열한 프리웨어 소프트웨어 리스트는 프로그램 이름, 홈페이지 주소, 프로그램 다운로드 링크 순으로 배치 되었습니다.

GIMP - 사진보정/이미지 저작툴
   http://www.gimp.org/
   http://downloads.sourceforge.net/gimp-win/gimp-2.6.2-i686-setup.exe
3D Box Shot Maker - 소프트웨어 상자 표지 디자인툴
   http://www.bosseye.com/boxshot/
   http://www.bosseye.com/dlfree/boxshot_setup.exe
7-Zip - 압축률이 높은 파일 압축 관리툴
   http://www.7-zip.org/
   http://downloads.sourceforge.net/sevenzip/7z457.exe
바닥 - 동영상 변환 유틸리티
   http://www.kipple.pe.kr/doc/badak/
   http://www.kipple.pe.kr/doc/badak/badak20081013.exe
빵집 - 국산 무료 파일 압축 관리툴
   http://www.bkyang.com/
   http://www.bkyang.com/download/bz3setup.exe
칼무리 - 국산 화면,웹페이지 캡쳐툴
   http://kalmuri.kilho.net/
   http://www.kilho.net/bbs/download.php?bo_table=temppds&wr_id=270&KaMuRi.exe
AbiWord - 워드 프로세서, 문서 편집툴
   http://www.abisource.com/
   http://www.abisource.com/download/
Alch Icon Suite - 윈도우즈 아이콘 추출, 편집툴
   http://alch.info
   http://www.brothersoft.com/alch-icon-suite-download-94853.html
AmitySource Userbar Generator - 가로형 막대 배너 제작툴
   http://www.amitysource.com/userbar_maker.php
   http://www.amitysource.com/distfiles/UBarGen2.2_en.exe
AnimPixels - 픽셀/비트맵 애니메이션 제작툴
   http://www.animpixels.com/
   http://www.animpixels.com/download.html
Aptana - 웹개발툴 IDE
   http://www.aptana.com/
   http://www.aptana.com/studio/download
Aqua Data Studio - 데이타베이스 관리툴 IDE
   http://www.aquafold.com/
   http://www.aquafold.com/downloads.html
Audacity - 사운드 녹음, 믹싱, 편집의 종합툴
   http://audacity.sourceforge.net/
   http://audacity.sourceforge.net/download/windows
Blender - 3D 애니메이션 툴
   http://www.blender.org/
   http://www.blender.org/download/get-blender/
   http://download.blender.org/release/Blender2.48a/blender-2.48a-windows.exe
CamStudio - 동영상 캡쳐 툴
   http://camstudio.org/
   http://www.camstudio.org/CamStudio20.exe
Code::Blocks - C++ 개발 툴 IDE
   http://www.codeblocks.org/
   http://www.codeblocks.org/downloads
Color Cop - 색상 추출 툴
   http://www.colorcop.net/
   http://www.colorcop.net/download
   http://www.colorcop.net/re/?download_colorcop
ColorPic - 색상 추출 툴
   http://iconico.com/colorpic/
   http://iconico.com/download.aspx?app=ColorPic
CPU-Z - PC 시스템 정보 조회 툴
   http://www.cpuid.com/cpuz.php
   http://www.cpuid.com/download/cpuz_148.zip
CutePDF Writer - 문서를 PDF 파일로 출력 및 변환 해주는 툴
   http://www.cutepdf.com/Products/CutePDF/writer.asp
   http://www.cutepdf.com/download/CuteWriter.exe
Cygwin - 윈도우즈에서 유닉스 환경 시뮬레이션 해주는 툴
   http://www.cygwin.com/
   http://www.cygwin.com/setup.exe
Digital Image Tool - 이미지 관리 및 변환 툴
   http://www.digitalimagetool.com/
   http://www.digitalimagetool.com/tank/digitalimagetoolinstaller1.3.exe
DrawPlus - 벡터 그래픽 애니메이션 제작 툴
   http://www.freeserifsoftware.com/software/DrawPlus/
   http://www.freeserifsoftware.com/commence-download.asp?CommenceDownload=drawplus&navproduct=drawplus&Check=True
DTaskManager - 윈도우즈 작업 관리자
   http://dimio.altervista.org/eng/
   http://dimio.altervista.org/stats/download.php?id=4
Easy Thumbnails - 이미지 섬네일 제작 및 변환 툴
   http://www.fookes.com/ezthumbs/
   http://www.fookes.com/ftp/free/EzThmb_Setup.exe
Eclipse - 통합 개발 환경 IDE
   http://www.eclipse.org/
   http://www.eclipse.org/downloads/
eMule - P2P 파일 공유 툴
   http://www.emule-project.net/
   http://www.emule-project.net/home/perl/general.cgi?l=1&rm=download
FastStone Image Viewer - 이미지 뷰어
   http://www.faststone.org/FSViewerDetail.htm
   http://www.faststone.org/FSViewerDownload.htm
FastStone Photo Resizer - 이미지 변환 툴
   http://www.faststone.org/FSResizerDetail.htm
   http://www.faststone.org/FSResizerDownload.htm
   http://www.faststone.org/DN/FSResizerSetup27.exe
Fedora - 리눅스 배포판
   http://fedoraproject.org/
   http://fedoraproject.org/en/get-fedora
Filezilla - 파일 FTP 전송 툴
   http://sourceforge.net/projects/filezilla/
   http://sourceforge.net/project/showfiles.php?group_id=21558
Firefox - 웹브라우져
   http://www.mozilla.or.kr/ko/products/firefox/
Flash Slide Show Maker Professional - 플래시 앨범을 슬라이드 쇼로 만들어주는 툴
   http://www.flash-slideshow-maker.com/
   http://www.flash-slideshow-maker.com/setup_flash_slideshow_maker.exe
FontHit Font Tools - 윈도우즈 글꼴/폰트 관리 툴
   http://www.download.com/FontHit-Font-Tools/3000-2316_4-10332216.html
   http://downloads.zdnet.com/abstract.aspx?docid=204056
FontInfo - 윈도우즈 글꼴/폰트 관리 툴
   http://www.xlmsoft.de/fontinfo.php
   http://www.xlmsoft.de/download/fisetup.exe
FontRenamer - 폰트에 사용된 이름들을 변경해주는 툴
   http://www.neuber.com/free/fontrenamer/
   http://www.webattack.com/dlnow/rdir.dll?id=105070
Foobar2000 - 음악/사운드 재생 툴
   http://www.foobar2000.org/
   http://www.foobar2000.org/?page=Download
FreeCommander - 윈도 탐색기 대체용 파일관리 툴
   http://www.freecommander.com/
   http://www.freecommander.com/fc_downl_en.htm
FreeMind - 무료 마인드맵 소프트웨어
   http://freemind.sourceforge.net/
   http://freemind.sourceforge.net/wiki/index.php/Download
Gadwin PrintScreen - 화면 캡쳐 툴
   http://www.gadwin.com/printscreen/
   http://www.gadwin.com/download/
GIMPShop - GIMP 를 포토샵 처럼 꾸며주는 툴
   http://thegimpshop.net/
   http://www.computerdefense.org/gimpshop/gimpshop_2.2.8_fix1_setup.exe
Google Chrome - 구글 웹브라우져
   http://www.google.com/chrome/
   http://www.google.com/chrome/eula.html
Google Docs - MS 오피스를 대체할 수 있는 구글 오피스 서비스
   http://docs.google.com/
Google Talk - 구글 메신져
   http://www.google.com/talk/intl/ko/
Handbrake - 강력한 DVD 리핑툴
   http://handbrake.fr/
   http://handbrake.fr/?article=download
HoverIP - 네트워크 IP 관리 툴
   http://www.hoverdesk.net/freeware.htm
   http://www.hoverdesk.net/dl/en/HoverIP.zip
ImageDiff - 픽셀단위의 이미지 비교 툴
   http://www.ionforge.com/products/
   http://www.ionforge.com/products/imagediff/imagediff.zip
ImageMagick - 명령어 방식으로 사용하는 그래픽 툴
   http://www.imagemagick.org/script/
   http://www.imagemagick.org/script/download.php
Inkscape - 일러스트레이터와 유사한 벡터 그래픽 편집기
   http://www.inkscape.org/
   http://downloads.sourceforge.net/inkscape/Inkscape-0.46.win32.exe
InsightPoint - 벡터 그래픽 편집기
   http://www.icytec.com/
   http://www.icytec.com/insightpoint/download/insightpoint-3.2.5.2-win.exe
IOBit Smart Defrag - 하드디스크의 조각모음 최적화 툴
   http://www.iobit.com/iobitsmartdefrag.html
   http://www.download.com/Smart-Defrag/3000-2094-10759533.html?part=dl-SmartDefr&subj=uo&tag=button
IrfanView - 이미지 뷰어 및 사운드, 동영상 재생기
   http://www.irfanview.com/
   http://www.irfanview.com/main_download_engl.htm
Jahshaka - 디자인 환경 통합 관리 툴
   http://jahshaka.org/
   http://jahshaka.org/Downloads
jEdit - 오픈소스 텍스트 에디터
   http://www.jedit.org/
   http://www.jedit.org/index.php?page=download
JR Screen Ruler - 윈도우즈 화면 길이/높이 측정 툴
   http://www.spadixbd.com/freetools/jruler.htm
   http://www.convertjunction.com/download/jruler.zip
Juice - 음악 재생 툴
   http://juicereceiver.sourceforge.net/
   http://prdownloads.sourceforge.net/juicereceiver/Juice22Setup.exe?download
KompoZer - 홈페이지 제작 공개 웹에디터
   http://kompozer.net/
   http://kompozer.net/download.php
LiveSwif Lite - 플래시 애니메이션 제작 툴
   http://www.download.com/LiveSwif-Lite/3000-6676_4-10276527.html
   http://www.zdnet.com.au/downloads/0,139024478,10276528s,00.htm
LOOXIS Faceworx - 2차원 사진을 이용해 3차원 얼굴 사진을 제작 하는 툴
   http://www.looxis.com/
   http://www.looxis.com/en/k75.Downloads_Bits-and-Bytes-to-download.htm
   http://www.looxis.com/download/LOOXIS_Faceworx_v1.exe
LS Screen Capture - 화면 캡쳐 툴
   http://www.linos-software.com/capture.html
   http://www.linos-software.com/FileDownload/SetupCapture.msi
Magnifier - 데스크탑 돋보기 화면 확대, 부분 캡쳐 툴
   http://iconico.com/magnifier/
   http://iconico.com/download.aspx?app=Magnifier
Media Coder - 오디오 및 비디오 일괄 변환 툴
   http://mediacoder.sourceforge.net/
   http://mediacoder.sourceforge.net/download.htm
Media Player Classic - 동영상 재생 툴
   http://sourceforge.net/projects/guliverkli/
   http://sourceforge.net/project/showfiles.php?group_id=82303
MPlayer - 동영상 재생 툴
   http://www.mplayerhq.hu/
   http://www.mplayerhq.hu/MPlayer/releases/win32/MPlayer-1.0rc2-gui.zip
NetBeans - 자바 통합 개발 환경 IDE
   http://www.netbeans.org/
   http://www.netbeans.org/downloads/index.html
Notepad++ - 프로그래밍 소스 편집기
   http://notepad-plus.sourceforge.net/
   http://notepad-plus.sourceforge.net/uk/download.php
Notepad2 - 텍스트 편집기
   http://www.flos-freeware.ch/notepad2.html
   http://www.flos-freeware.ch/zip/notepad2.zip
nPOP - POP3 이메일 클라이언트 툴
   http://www.nakka.com/soft/npop/index_eng.html
   http://www.nakka.com/soft/npop/download/npop109win32_EN.zip
ntop - 네트워크 모니터링 툴
   http://www.ntop.org/
   http://www.ntop.org/download.html
Open Capture - 화면 캡쳐 툴
   http://openproject.nazzim.net/opencapture.htm
   http://openproject.nazzim.net/download/opencapture_v1.3.5.exe
OpenOffice.org - MS 오피스 대체할 수 있는 오픈 오피스 툴
   http://www.openoffice.org/
   http://openoffice.or.kr/main/page.php?id=download
   ftp://ftp.daum.net/openoffice/localized/ko/3.0.0/OOo_3.0.0_Win32Intel_install_ko.exe
openSUSE - 노벨의 리눅스 배포판
   http://www.opensuse.org/en/
   http://software.opensuse.org/
Opera - 웹브라우져
   http://www.opera.com/
   http://www.opera.com/download/get.pl?id=31867&thanks=true&sub=true
Paint.NET - 그래픽 편집 툴
   http://www.getpaint.net/
   http://www.dotpdn.com/files/Paint.NET.3.36.zip
   http://fileforum.betanews.com/download/PaintNET/1096481993/1
ParticleDraw - 추상적인 이미지를 그려주는 툴
   http://www.anandavala.info/TASTMOTNOR/code/v5.1/ParticleDraw/SMNPD5_1.html
   http://www.anandavala.info/TASTMOTNOR/code/v5.1/ParticleDraw/SMN_v5.1_ParticleDraw.zip
PC Wizard - 하드웨어, 시스템 정보 제공 툴
   http://www.cpuid.com/pcwizard.php
   http://www.cpuid.com/download/pcw2008_v187.exe
photoWORKS - 디지털 이미지 수정 툴
   http://andojung.com/photoWORKS/
Picasa - 이미지 관리 툴
   http://picasa.google.com/
   http://picasa.google.com/picasa_thanks.html
PicPick - 캡쳐, 눈금자, 각도기, 색상추출 그래픽 툴
   http://picpick.wiziple.net/
   http://picpick.wiziple.net/features
Pidgin - 인스턴트 메신저
   http://www.pidgin.im/
   http://www.pidgin.im/download/
PrimoPDF - 인쇄물을 PDF 파일로 변환 해주는 툴
   http://www.primopdf.com/
   http://www.download.com/PrimoPDF/3000-10743_4-10264577.html?part=dl-10264577&subj=dl&tag=button
Project Dogwaffle - 애니메이션 제작이 가능한 그래픽 툴
   http://www.thebest3d.com/dogwaffle/free/
   http://www.thebest3d.com/dogwaffle/free/Dogwaffle_Install_1_2_free.exe
PuTTY - 텔넷/SSH 접속 툴
   http://www.chiark.greenend.org.uk/~sgtatham/putty/
   http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
Safari - 애플 웹브라우져
   http://www.apple.com/kr/safari/
   http://www.apple.com/kr/safari/download/
ScreenHunter - 화면 캡쳐 툴
   http://wisdom-soft.com/products/screenhunter_free.htm
   http://www.wisdom-soft.com/downloads/setupscreenhunterfree.exe
SharpDevelop - C#, VB.NET, Boo 프로젝트를 위한 개발 툴 IDE
   http://www.sharpdevelop.net/OpenSource/SD/
   http://www.sharpdevelop.net/OpenSource/SD/Download/
SketchUp - 3D 그래픽툴
   http://sketchup.google.com/
   http://sketchup.google.com/download/
SmoothDraw NX - freehand 드로잉/스케치 그래픽 툴
   http://www.smoothdraw.com/product/freeware.htm
   http://www.smoothdraw.com/sdnx/SmoothDrawNXSetup.exe
Sothink SWF Catcher - 플래시 파일 추출/저장 툴
   http://www.sothink.com/product/swfcatcher/ie/
   http://www2.sothink.com/download/swfcatcher_IE.zip
SPRAY vector generator - Vector skeleton creation tool, 이미지 제작 툴
   http://xaraxtv.at.tut.by/
   http://www.download.com/3001-2191_4-10623896.html
Spread32 - 스프레드시트
   http://www.byedesign.co.uk/
   http://www.byedesign.co.uk/s32/spre32en.zip
SQLTools - 오라클 관리 툴
   http://www.sqltools.net/
   http://www.sqltools.net/downloads.html
SQLTools for ORACLE - 오라클 관리 툴
   http://heiya.webice.kr/
   http://heiya.webice.kr/PDS/SQLTools.zip
SQuirreL SQL Client - 자바 기반 데이타베이스 관리 툴
   http://www.squirrelsql.org/
   http://www.squirrelsql.org/#installation
Starter - 시작 프로그램, 프로세서, 서비스 관리 툴
   http://codestuff.tripod.com/products_starter.html
   http://codestuff.tripod.com/Starter_English.zip
Stickies - 포스트잇, 윈도우즈 메모장 툴
   http://www.zhornsoftware.co.uk/stickies/
   http://www.zhornsoftware.co.uk/stickies/download.html
Synfig Studio - 2D 애니메이션 툴
   http://www.synfig.com/
   http://www.synfig.com/download/
Tail for Win32 - 로그 파일 모니터링 툴
   http://tailforwin32.sourceforge.net/
   http://sourceforge.net/projects/tailforwin32
Terragen - 지형 제작 툴
   http://www.planetside.co.uk/terragen/
   http://www.planetside.co.uk/terragen/win/downloadwin.shtml
The Font Thing - 윈도우즈 트루타입 폰트 관리 툴
   http://members.ozemail.com.au/~scef/tft.html
   http://members.ozemail.com.au/~scef/tft/tftdownloadmain.html
The KMPlayer - 동영상 재생 툴
   http://www.kmplayer.com/
   http://www.kmplayer.com/forums/showthread.php?t=4094
Thunderbird - 이메일 클라이언트 프로그램
   http://www.mozilla.com/en-US/thunderbird/
   http://www.mozilla.com/products/download.html?product=thunderbird-2.0.0.18&os=win&lang=ko
Tomahawk PDF+ - PDF 파일 생성 툴
   http://www.nativewinds.montana.com/software/tpdfplus.html
   http://www.nativewinds.montana.com/downloads/TPDFPlus.zip
TortoiseSVN - SVN 클라이언트 툴
   http://tortoisesvn.tigris.org/
   http://tortoisesvn.net/downloads
TouchArt Sampler - 3D 아트워크 조절 및 실행 툴
   http://www.derivativeinc.com/TouchArt/TouchArtSampler.asp
   http://www.derivativeinc.com/Temp/TouchArtSampler017.exe
Ubuntu - 리눅스 배포판
   http://www.ubuntu.com/
   http://www.ubuntu.com/getubuntu/download
uMark Lite - 워터마크 삽입 툴
   http://www.uconomix.com/Products/uMark/
   http://www.uconomix.com/Downloads.aspx
UnFREEz - 애니메이션 GIF 제작 툴
   http://www.whitsoftdev.com/unfreez/
   http://www.whitsoftdev.com/files/unfreez.zip
uTorrent - P2P 파일 공유 툴
   http://www.utorrent.com/
   http://www.utorrent.com/download.php
VirtualDub - 동영상 편집 및 형식 변환 툴
   http://www.virtualdub.org/
   http://virtualdub.sourceforge.net/
VirtuaWin - 윈도우즈 가상 데스크탑 관리 툴
   http://virtuawin.sourceforge.net/
   http://virtuawin.sourceforge.net/downloads.php
VLC Media Player - 동영상 재생 툴
   http://www.videolan.org/vlc/
   http://www.videolan.org/vlc/download-windows.html
Winamp - 음악 파일 재생 툴
   http://www.winamp.com/
   http://www.winamp.com/player
WinTail - 로그 파일 모니터링 툴
   http://www.baremetalsoft.com/wintail/
   http://www.baremetalsoft.com/wintail/download.php?p=p
X-Chat - IRC 채팅 툴
   http://silverex.info/
   http://silverex.info/download/
Xnews - 뉴스리더 툴
   http://xnews.newsguy.com/
   http://xnews.newsguy.com/#download
XnView - 이미지 관리 툴
   http://pagesperso-orange.fr/pierre.g/xnview/enxnview.html
   http://pagesperso-orange.fr/pierre.g/xnview/endownload.html

웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

Posted by 프로그래머

2008/11/22 14:02 2008/11/22 14:02

며칠 전 위자드웍스에 의해 런칭한 위자드 팩토리에 있는 이쁜 위젯들로 블로그를 토핑해보면 어떨까요? 막 오픈한 서비스라서 선택할 수 있는 위젯의 폭이 아직은 좁은 편이나, 좀더 쉽고 간편하게 사용할 수 있도록 구성한 유저 인터페이스와 아기자기한 디자인이 돋보입니다. 유저가 직접 위젯을 제작할 수 있도록 오픈API도 이미 공개되어 있고 이번에 추가로 버전업까지 되었으니 유저가 손수 만들어서 올릴 수 있습니다. 이렇게 만들어져 공개된 위젯은 위자드 팩토리에 의해 다양한 채널로 배포될 수 있습니다. 앞으로 퍼가고 싶은 다양하고 개성 있는 위젯들이 얼마나 많이 진열될 것이냐에 따라 위젯공장의 역할을 할 수 있을지 여부가 결정될 것 같습니다. 위젯공장에 있는 눈에 띄는 시계, 날씨, 아기 위젯과 이번에 추천 블로그 선정으로 위젯공장에 등록된 제 블로그 RSS피드 위젯을 띄워봅니다.

웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

Posted by 프로그래머

2008/10/13 23:49 2008/10/13 23:49

기자나 블로거들이 뉴스 기사 소재, 블로깅 소스를 찾을 수 있는 뉴스와이어라는 홈페이지가 있습니다. 2004년에 오픈한 이 서비스를 이용하는 기자분들이 많은 것으로 알고 있고 또한 기업이나 정부,기관,단체 입장에서 보면 홍보할 수 있는 채널로 활용될 수 있어 홍보나 마케팅에 관여하는 분들이 이 서비스를 많이 이용하고 있기도 합니다. 네이버 백과사전으로 검색해 보면 아래와 같은 소개가 나오는군요.

[네이버 백과사전 뉴스와이어 검색 결과]
업종 통신사
설립자 코리아뉴스와이어(주)
설립일 2004년 7월

정식명칭은 코리아뉴스와이어이다. 기업·정부·기관·단체 등이 발표하는 보도자료를 체계적으로 수집·분류해 언론사에 제공하는 온라인 통신사이다. 2004년 7월 설립된 코리아뉴스와이어(주)가 같은 해 8월 12일부터 서비스를 시작하였다.

기업이나 기관의 홍보인이 보도자료와 관련해 수많은 언론매체를 상대해야 하는 데 따르는 여러 가지 불편을 해소하고, 언론사로 하여금 뉴스에 필요한 보도자료에 자유자재로 접근하게 함으로써 지식 기반 저널리즘의 발전에 밑거름 역할을 하는 데 목적이 있다.

이를 위해 2006년 7월 현재 데이터베이스화한 국내 주요 기업과 정부 부처 1500여 개 기업과 기관의 보도자료 및 사진 등의 정보만 16만여 건이 된다. 언론인은 이 데이터베이스를 풀텍스트로 검색할 수 있다. 또 언론인의 전문화 추세에 따라 세밀한 뉴스 분류 체계를 갖추고, 각 분야 담당기자에게 맞춤형 보도자료도 제공한다.

주요 서비스 분야는 금융·부동산·중화학·자동차·전자통신·미디어·유통·생활·건강과학·교육·정치·정부·교육·문화연예·레저 등 14개이다. 언론인 회원에게 하루 2회 오늘의 보도자료를 제공하고 있다.

그밖에 언론인 회원이 특정 업종의 뉴스와이어 홍보인 회원들에게 자신이 기획·취재 중인 주제에 대하여 정보 제공을 요청하는 동보메일을 보내 다양한 정보를 수집할 수 있게 해주는 주문형 보도자료 서비스인 QA넷 서비스를 제공할 예정이다.

4년이 넘게 축적되어 온 검색할 수 있는 보도자료의 분량이 방대하며 매일 새롭게 발행되는 보도자료들을 통해 기업의 투자 정보, 기업의 신상품 정보, 기업의 새로운 서비스 런칭 소식 등을 뉴스 기사보다 빠르게 보도자료를 통해 접할 수 있는 장점이 있는 서비스입니다. 언론인 회원에게는 매일 이메일을 통해 개별 전송해 주는 서비스를 통해 편의를 제공하고 있고 최근 개편으로 서비스의 품질이 향상 되었으며 특히 사진 검색 서비스는 기자들에게 매우 편리해졌다며 애용되고 있는 것 같습니다. 기업이나 기관의 홍보 담당자나 기자들이라면 필수로 이용해야 하는 서비스가 아닌가 싶네요. 또한 뉴미디어로 조명을 받고 있는 블로그를 운영하고 있는 블로거들도 블로깅 소스로 활용하면 좋을 것 같습니다.

웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

Posted by 프로그래머

2008/10/12 17:12 2008/10/12 17:12

RSS 피드와 위자드닷컴 마이젯을 이용한 위젯입니다. RSS 피드를 이용하면 간단하게 위젯을 만들기에 좋은 것 같습니다. 피드 목록 아래에는 이런 피드를 PHP 로 직접 읽어서 목록을 만드는 간단한 예시도 덧붙여 봅니다.

전체 행사 RSS ==▶ http://www.linknow.kr/rss/event/list/recent
인기 행사 RSS ==▶ http://www.linknow.kr/rss/event/list/best
블로거 클럽 최신 게시물 RSS ==▶ http://www.linknow.kr/rss/group/blog/recent
블로거 클럽 최근 방문자 RSS ==▶ http://www.linknow.kr/rss/group/blog/visit
플래시 카페 최신 게시물 RSS ==▶ http://www.linknow.kr/rss/group/flash/recent
플래시 카페 최근 방문자 RSS ==▶ http://www.linknow.kr/rss/group/flash/visit
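
위 피드들은 위자드닷컴 마이젯뿐만 아니라 PHP 의 SimpleXML 로 직접 읽어서 간단한 목록형 위젯을 만드는 데에도 쓸 수 있습니다. 아래는 그런 아이디어를 담은 간단한 스케치입니다. 피드 주소만 위 목록의 것이고, 출력 개수와 마크업은 예시이니 입맛에 맞게 바꿔 쓰면 됩니다.

<?php
// RSS 2.0 피드를 읽어서 최근 글 제목 5개를 목록(<ul>)으로 출력하는 간단한 스케치입니다.
// 피드 주소는 위 목록의 것을 그대로 썼고, 나머지(개수, 마크업)는 예시입니다.
$feed_url = "http://www.linknow.kr/rss/event/list/recent";
$rss = @simplexml_load_file($feed_url);
if ($rss === false) {
    echo "피드를 불러오지 못했습니다.";
} else {
    echo "<ul>";
    $count = 0;
    foreach ($rss->channel->item as $item) {
        if (++$count > 5) break;
        $title = htmlspecialchars((string)$item->title);
        $link  = htmlspecialchars((string)$item->link);
        echo "<li><a href=\"{$link}\">{$title}</a></li>";
    }
    echo "</ul>";
}
?>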

Posted by 프로그래머

2008/09/19 14:21 2008/09/19 14:21

"전문직 직장인들은 어떤 이메일을 대표 이메일로 사용하고 있을까?"라는 질문에 대한 의미있는 통계가 있습니다. 직장인 4만7천명을 대상으로 이메일 사용을 분석한 결과 다양한 포털에서 제공하는 웹메일을 개설하고 있지만 주로 사용하는 이메일은 한 두개 정도이며 대표 이메일의 판도가 지각변동의 조짐이 있는 것으로 파악되었습니다. 이메일의 대표주자 였던 다음 한메일을 제치고 네이버 메일이 선두를 달리고 있으며 구글의 지메일과 KTH의 파란 메일이 점유율을 빠르게 높이고 있습니다.

웹프로그래머의 홈페이지 정보 블로그 http://hompy.info


[보도 자료] http://www.newswire.co.kr/?job=news&no=359712

직장인들 웹메일 대거 바꿨다...네이버, 다음 제치고 1위…네이트, 지메일 급부상

(서울=뉴스와이어) 2008년 09월 17일 -- 이메일의 대명사였던 다음의 한메일이 네이버 메일에 1위 자리를 내주고, 네이트와 구글의 지메일이 급부상하는 등 직장인들 사이에서 웹메일 서비스 이용에 큰 판도 변화가 일어나고 있는 것으로 조사됐다.

비즈니스용 인맥 구축 서비스인 링크나우(www.linknow.kr)는 회원 4만7천명의 이메일 사용 현황을 조사해 17일 발표했다.

이 조사 결과에 따르면 링크나우에서 웹메일을 기본메일로 사용하는 회원 가운데 naver 메일의 점유율은 26.1%로, hanmail과 daum을 합친 다음 메일의 점유율(24.2%)보다 높은 것으로 나타났다.
 
이어 3위는 nate로 11.8%였으며, 4위인 gmail은 9.3%, 5위 hotmail/msn은 7.2%, 6위 paran은 5.6%, 7위 empas는 5.0%, 8위 yahoo는 4.0%, 9위 korea.com은 2.3%, 10위 dreamwiz는 1.9%, 11위 lycos는 1.3%, 12위 chollian 1.2%의 점유율을 보였다.

이같은 순위는 2000년대 전반까지 다음의 메일이 전체 웹메일 서비스에서 50% 정도의 점유율을 차지하고, 이어서 엠파스 메일이 약 20% 수준으로 2위를 달리던 과거의 판도와는 큰 차이를 보이는 것이다.
 
링크나우는 다른 포털과 달리 이메일을 아이디로 사용하고 있고, 회원이 5개까지 메일을 등록해놓고 기본메일 설정을 통해 사용하는 메일을 바꿔가면서 인맥을 구축할 수 있다. 따라서 여러 종류의 웹메일에 중복 가입한 경우라 하더라도 회원이 이 가운데 어떤 메일을 주로 쓰고 있는지 쉽게 알 수 있다. 또한 링크나우는 서비스를 론칭한 지 1년밖에 안돼 직장인과 전문직 종사자들의 최근 메일 사용패턴을 잘 반영하고 있다.

직장인들 사이에 네이버 메일 사용자가 다음의 메일 사용자를 능가한 것은 검색과 블로그, 초기페이지 설정에서 네이버가 절대 강자가 된 것과 관련이 깊은 것으로 풀이된다.
 
또한 네이트 메일의 이용 점유율 확대는 다른 서비스와는 차별화된 메신저 및 휴대폰 문자 메시지 연계 서비스가 어필한 결과로 추정된다.
 
지난해 초 국내에서 이메일 서비스를 개방한 구글의 지메일이 불과 1년여만에 국내 포털과 필적할 만큼 무서운 상승세를 보인 것은 기가바이트의 대용량 저장공간을 제공하고, 검색 등 다른 서비스와 연계하면서 전문직을 파고 든 전략이 효과를 발휘한 것으로 보인다.
 
KTH의 파란 메일도 대용량 서비스 제공에 힘입어 점유율이 상승한 것으로 나타났다. 반면 MS, 엠파스, 야후, 코리아닷컴, 드림위즈의 메일 서비스는 밀리거나 답보 상태인 것으로 나타났다.
 
링크나우 신동호 대표는 “종합포털 서비스에서 메일은 검색, 커뮤니티, 블로그 같은 기본 서비스와 함께 회원의 포털 재방문을 결정하는 핵심 요소 가운데 하나이다”며 “직장인들 사이에서 일어나고 있는 메일 서비스 점유율의 변화가 앞으로 포털 전체의 판도 변화에도 큰 영향을 줄 것으로 보인다”고 밝혔다.
 
링크나우(www.linknow.kr)는 18세 이상 성인이 프로필을 통해 자신의 경력과 전문성을 알리고 필요한 사람과 인맥을 연결할 수 있는 비즈니스용 소셜 네트워킹 서비스(SNS=Social Networking Service)로, 3촌(친구의 친구의 친구)까지 찾을 수 있는 강력한 '인맥검색엔진'을 통해 회원이 순식간에 방대한 인맥을 구축할 수 있게 해준다.
위키넷 소개: 링크나우(www.linknow.kr)는 (주)위키넷이 운영하는 비즈니스용 소셜 네트워킹 서비스(SNS=Social Networking Service)이다. 링크나우 회원은 4만5천명이며 주로 CEO, 직장인, 전문직 종사자들이 인맥을 구축하는데 이 서비스를 이용하고 있다. 링크나우는 친구의 친구의 친구 즉 3촌까지 인맥을 검색할 수 있는 강력한 인맥검색엔진을 갖고 있다. 또한 대학 동창 찾기, 직장 동료 검색 기능과 함께 인물 추천, 인물 소개, 그룹, 행사 기능 등을 갖고 있다.
출처: 위키넷

[관련 기사]
데이터뉴스 - 직장인 웹메일 점유율…네이버 1위
동아일보 - 네이버 메일 한메일 추월
K모바일 - 직장인들 웹메일 대거 바꿨다
아이뉴스24 - 네이버메일, 한메일 제쳤다…링크나우
노컷뉴스 - 웹메일 지존 '한메일' 네이버에 1위 내줬다?
한국경제TV - 네이버 메일, 한메일 제쳤다
아시아경제 - 다음, 메일 1위 네이버에 내줬다

Posted by 프로그래머

2008/09/19 08:44 2008/09/19 08:44
Response
A trackback , 3 Comments
RSS :
http://hompy.info/rss/response/497

이번에 링크나우 프로필 버튼에 새로운 디자인이 추가되었습니다. 추가된 디자인을 포함해 전체 프로필 버튼을 열거하고, 프로필 버튼을 블로그에 부착해서 자신을 보다 손쉽게 알리는 방법으로 유용하게 사용하고 있는 사례도 함께 열거합니다. 적용 사례들을 요약해 보면 일반적으로 블로그의 사이드바에 프로필 버튼을 부착해서 사용하는 경우가 많으며, 이메일을 보낼 때 서명으로 사용하기도 하고, 카페의 메인 화면에서 주인장을 소개하는 용도나 기업 홈페이지에서 담당자를 소개하는 용도로 쓰이기도 합니다. 때로는 카페나 블로그 게시물의 신뢰도를 높이기 위해 게시물 하단에 덧붙여서 사용하는 경우도 있습니다. 궁극적으로는 프로필 버튼을 동반한 컨텐트의 신뢰도를 높이고, 이런 활용을 통해 관련 인맥을 확장하려는 것이 대체적인 목표입니다. 디자인이 가능한 개인이라면 자신의 취향에 맞게 개성 있는 프로필 버튼을 직접 디자인해 보는 것도 의미 있는 일이겠습니다.

웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

[프로필 버튼 리스트]

[프로필 버튼을 이용하고 있는 블로그 리스트]
http://makeceo.com - 2010년 나는 CEO
http://kini.tistory.com - kini's Sportugese
http://junycap.com/blog - Interactive Dialogue and PR 2.0
http://planspace.tistory.com - 기획전문가(세상을 기획하는 남자)
http://hunking.tistory.com - 자기혁신연구소
http://choikorean.tistory.com - We Are The STAR.
http://ikejo.tistory.com - I am.
http://evermore.pe.kr/tc - Evermore Blog
http://farmhouse.tistory.com - 즐거운 전원생활
http://1000sk.tistory.com - 춘래불사춘
http://hwanyc.tistory.com - hwanyc's disaffection market
http://acrobat.egloos.com - PASS THE MIC
http://antop.pe.kr/tc - 머리속의 한계를 대신하는 저장소
http://beautifulos.blogspot.com - 아름다운 OS 솔라리스
http://boan.tistory.com - 엔시스의 정보보호 따라잡기
http://cityguy.tistory.com - Dream's Come True... 2008!
http://flasher0420.cafe24.com/zbxe/blog - KimHeoungJin Blog
http://koon.tistory.com - KOON's Blackhole
http://manwol.kr - 말랑말랑
http://neojjang.egloos.com - 살다보면...
http://nicehwan.net - Nicehwan™s Beautiful world
http://paro85.tistory.com - 그때 너는 붉었다..
http://pweb.tistory.com - 파라오의 웹마케팅
http://sexygony.com - 섹시고니(sexygony)의 세상 비틀기
http://shinhwanoh.blogspot.com - cmdesign's Blogger station
http://subby.co.kr - 서비나라의 세상사는 이야기
http://weceo.kr - 창조코리아
http://www.medicaltourisminkorea.com/nhkee - Medical Tourism
http://www.sis.pe.kr - 엔시스의 정보보호 따라잡기
http://www.webnbizr.com - Web N Bizr

Posted by 프로그래머

2008/09/15 10:52 2008/09/15 10:52

어도비 플래시를 설치하면 기본으로 플래시 플레이어가 깔립니다. 그런데 네비게이션이 불편하다 보니 플래시를 앞뒤로 이동시킬 수 있는 네비게이션이 달린 플레이어가 필요해서 검색해 보았고, 무료로 쓸 만한 네비게이션이 있는 플래시(SWF)와 FLV 동영상 플레이어가 있어 소개합니다. 더 좋은 플래시 플레이어가 있을 수 있지만 이 정도면 쓸 만한 것 같습니다. 아래 프로그램 중 특히 swf_flv_player 가 여러모로 좋아 보입니다.^^
사용법은 간단하니 직접 설치해서 체험해보세요.

  swf_flv_player.exe



flash_movie_player.exe


웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

Posted by 프로그래머

2008/09/05 08:28 2008/09/05 08:28

며칠 전 태터툴즈를 설치해서 사용하던 블로그를 텍스트큐브로 업그레이드했습니다. 업그레이드가 다소 번거롭지 않을까 해서 미뤄 두었던 것을 이제야 하게 되었네요. 업그레이드는 생각보다 손쉽게 되었습니다. 기존 태터툴즈 소스 위에 최신 텍스트큐브 소스를 덮어씌우고 설정만 고쳐 주었더니 나머지 처리는 알아서 해 주는 듯합니다. 그런데 기존에 사용하던 플러그인 중 몇몇은 사용할 수 없게 되었고, 소스를 일부 수정해서 사용하던 것도 무용지물이 되었습니다. 다시 제 필요에 맞게 튜닝을 하고 쓸 만한 플러그인이 있는지 찾아봐야겠군요.
이것 저것 살펴보던 중에 관리자 페이지 리퍼러 기록 부분이 맘에 차지 않아서 어제 야밤에 관련 소스를 수정하게 되었습니다. 혹시 필요하신 분들이 있을지 모르니 그런 분이 있다면 아래 소스를 참고하세요. 리퍼러 로그 플러그인 소스의 일부를 아래에 소개한 소스 코드로 변경하면 검색 주소인 경우 아래 그림과 같이 주소 앞부분에 검색어를 강조해서 노출시켜주고 날짜에 시간도 함께 표시해줍니다.
설치형 블로그나 개인 홈페이지를 운영하다 보면 인터넷 서퍼들이 어떤 경로로 나의 홈페이지에 유입되는지에 관심이 생기며 때로는 리퍼러 로그를 보는 것이 홈페이지를 운영하는 즐거움 중에 하나입니다. 더 나아가 블로그의 운영 방향을 결정하는 자료로 활용되기도 하고 홈페이지 마케팅의 기초 자료가 되기도 합니다. 그 중에 리퍼러 로그는 실시간으로 반응을 확인할 수 있는 자료이며 이를 명확하고 직관적으로 확인할 수 있게 해주는 것이 도움이 될 수 있습니다. 그런 차원에서 아래 제시한 소스는 작게나마 도움이 될 수 있습니다. 티스토리의 경우 아마 이런 플러그인이 지원 되는 것으로 알고 있고 검색어를 강조해주는 텍스트큐브용 플러그인도 있을 지 모르겠군요.

웹프로그래머의 홈페이지 정보 블로그 http://hompy.info

[리퍼러 기록 출력 화면]


[변경전 소스 코드:tc/plugins/PN_Referer_Default/index.php]
<?php
$more = false;
list($referers, $paging) = Statistics::getRefererLogsWithPage($_GET['page'], $perPage);
for ($i=0; $i<count($referers); $i++) {
    $record = $referers[$i];

    $className = ($i % 2) == 1 ? 'even-line' : 'odd-line';
    $className .= ($i == sizeof($referers) - 1) ? ' last-line' : '';
?>
<tr class="<?php echo $className;?> inactive-class" onmouseover="rolloverClass(this, 'over')" onmouseout="rolloverClass(this, 'out')">
        <td class="date"><?php echo Timestamp::formatDate($record['referred']);?></td>
        <td class="address"><a href="<?php echo misc::escapeJSInAttribute($record['url']);?>" onclick="window.open(this.href); return false;" title="<?php echo htmlspecialchars($record['url']);?>"><?php echo fireEvent('ViewRefererURL', htmlspecialchars(UTF8::lessenAsEm($record['url'], 70)), $record);?></a></td>
</tr>
<?php
}
?>


[변경후 소스 코드:tc/plugins/PN_Referer_Default/index.php]
<?php
$more = false;
list($referers, $paging) = Statistics::getRefererLogsWithPage($_GET['page'], $perPage);
for ($i=0; $i<count($referers); $i++) {
      $record = $referers[$i];

      $className = ($i % 2) == 1 ? 'even-line' : 'odd-line';
      $className .= ($i == sizeof($referers) - 1) ? ' last-line' : '';

      // 리퍼러 URL을 디코딩하고, UTF-8이 아니면 EUC-KR로 간주해서 UTF-8로 변환합니다.
      $record_url = urldecode($record['url']);
      if (iconv("UTF-8","UTF-8",$record_url)!=$record_url) {
                  $record_url = iconv("EUC-KR","UTF-8",$record_url);
      }
      $record_url_title = $record_url;

      // 검색 리퍼러의 질의 파라미터(&q= 또는 ?q=) 위치를 찾습니다.
      $q_record_url = strstr($record_url,"&q");
      if (!$q_record_url) {
            $q_record_url = strstr($record_url,"?q");
            if ($q_record_url) $q_record_url[0] = "&";
      }
      // q 파라미터가 있으면 검색어를 굵게 표시해서 URL 앞에 붙입니다.
      // (split()은 PHP 5.3부터 deprecated이므로 동일하게 동작하는 explode()를 사용합니다.)
      if ($q_record_url) {
            $arr_record_url = explode("&",$q_record_url);
            $arr_record_url = explode("=",$arr_record_url[1]);
            $record_url = "<b>".$arr_record_url[1]."</b> : ".$record_url;
      }

?>
<tr class="<?php echo $className;?> inactive-class" onmouseover="rolloverClass(this, 'over')" onmouseout="rolloverClass(this, 'out')">
      <td class="date"><?php echo date("m-d H:i",$record['referred']);?></td>
      <td class="address"><a href="<?php echo misc::escapeJSInAttribute($record['url']);?>" onclick="window.open(this.href); return false;" title="<?php echo htmlspecialchars($record_url_title);?>"><?php echo UTF8::lessenAsEm($record_url, 70);?></a></td>
</tr>
<?php
}
?>

Posted by 프로그래머

2008/05/27 08:39 2008/05/27 08:39

홈페이지 홍보를 위해 댓글을 다는 아르바이트도 있을 것이고, 그것도 번거로워 매크로를 돌리는 사람도 있을 것이며, 심지어는 댓글을 자동으로 등록해주는 봇을 만들어서 사용하는 사람도 있을 것입니다. 그로 인해 원치 않는 댓글, 성인 사이트나 다이어트 관련 사이트를 홍보한다거나 쇼핑몰을 홍보하는 등의 댓글이 내 블로그에도 심심치 않게 달리곤 합니다. 개인적으로 운영하는 패션 카페에서도 이런 상황이 연출되는데, 주로 미용에 관련된 사이트를 댓글로 홍보하며 그것을 수동으로 지우는 일이 만만치 않습니다.
어제는 블로그를 보니 엄청나게 많은 스팸 댓글이 달려 있더군요. 이럴 경우 보통은 댓글에 규칙이 있어서 적절한 키워드를 필터링에 등록하면 해결이 되었는데, 이번 것은 별다른 규칙을 찾을 수 없었습니다. 한동안 고민하다가 공백을 필터링 키워드로 등록하면 되겠다는 결론을 내렸고, 공백을 이름 필터링에 직접 등록하려 했으나 등록이 되지 않아 임의 키워드를 등록한 뒤 DB에 접속해서 그 임의 키워드를 공백으로 수정하는 방법으로 해결했습니다.
그리고 댓글 관련 테이블을 보니 그동안 필터링에 의해 쌓여 있는 보이지 않는 댓글이 몇십만 개나 되더군요. 일단 불필요하니 제거를 했습니다. 오늘도 공백 키워드로 필터링된 보이지 않는 댓글이 몇만 개가 되어 제거하였습니다.
이런 스팸을 방어하는 일련의 과정들도 한 두번이면 재미삼아 해볼 수 있겠지만 끊임없이 새로운 아이디어로 진화된 스팸이 뿌려진다면 아마도 스트레스가 될 것 같군요. 스팸을 효율적으로 방어하거나 이미 노출된 스팸을 간편하게 소탕할 수 있는 시스템이 지속적으로 개발되고 공유되어야 불필요하게 낭비된 시간과 네트웍 및 시스템 자원 그리고 여타 비용 등을 절감할 수 있게 될 것입니다.

참고로 저의 블로그 환경설정에 다음과 같은 키워드들이 필터로 등록되어 있습니다.
- 홈페이지 필터링 : sex, fuck, girl, women, woman, -
- 본문 필터링 : 다이어트, 대출, 신용, 감량, 바카라, 강원랜드
- 이름 필터링 : 공백(" ")

웹프로그래머의 홈페이지정보 블로그 http://hompy.info

Posted by 프로그래머

2008/05/24 08:31 2008/05/24 08:31
Response
No Trackback , 3 Comments
RSS :
http://hompy.info/rss/response/474

선택한 윈도우를 항상 위로 보이도록 해주는 유틸리티 Vitrite (Always On Top)입니다. 해당 윈도우의 투명도 조절도 가능합니다. 비슷한 유틸리티들이 많이 있겠지만 소스도 함께 공개되어 있으니 프로그래머라면 소스를 수정해서 자신의 구미에 맞게 수정할 수도 있는 장점이 있을듯 합니다. 이미지나 플래시 또는 동영상을 항상 위에 올려놓고 보고 싶을 때가 있는데 그래서 여기 저기 검색을 하다가 찾아낸 눈에 띄는 유틸리티입니다. 멀티미디어 파일을 윈도우 타이틀와 테두리 없이 해당 멀티미디어 파일만 보여주는 유틸리티가 있으면 좋겠는데 찾지 못했네요. 그런 유틸리티를 알고 계신 분은 제보 바랍니다.^^


http://home.insightbb.com/~ryanvm/tinyutilities/vitrite/

* 기능키
[CTRL]+[SHIFT]+[+], [CTRL]+[SHIFT]+[-], [CTRL]+[SHIFT]+[1] ~ [CTRL]+[SHIFT]+[9]


웹프로그래머의 홈페이지정보 블로그 http://hompy.info

Posted by 프로그래머

2007/12/27 08:39 2007/12/27 08:39

아는 분이 블로그 주소를 바꾸려고 하는데 기존 링크나 트랙백이 무용지물이 되는 문제가 생긴다고 해결책에 대해 메신져로 물어오셨습니다. 업무중이라 자세한 답을 해드리지 못해 이렇게 글로 남겨봅니다.

아래 자바스크립트에서 빨간색 부분을 기존 블로그 주소로, 파란색 부분을 새로 바뀔 블로그 주소로 수정해주시고 수정하신 스크립트를 블로그 스킨 상단 또는 하단에 부착하세요. 그리고 녹색 부분은 2초 후에 페이지를 전환한다는 의미가 됩니다. 바로 페이지가 전환되기를 희망한다면 setTimeout("go_mypage()",2000); 를 go_mypage(); 로 교체 하세요. 샘플 화면을 보시려면 아래 링크를 클릭해보세요.
http://www.hompydesign.com/tt/332

<script>
// 현재 주소에서 기존 블로그 주소를 새 블로그 주소로 치환해서 이동시키는 스크립트입니다.
function switch_page(src, dest){
  var url = location.href.replace(src,dest); // 기존 주소(src) 부분을 새 주소(dest)로 치환
  location.replace(url);                     // 히스토리에 남기지 않고 새 주소로 이동
}
function go_mypage() {
  switch_page('www.hompydesign.com/tt','hompy.info'); // (기존 블로그 주소, 새 블로그 주소)
}
setTimeout("go_mypage()",2000); // 2000ms(2초) 후에 전환, 바로 전환하려면 go_mypage(); 로 교체
</script>

웹프로그래머의 홈페이지정보 블로그 http://hompy.info

Posted by 프로그래머

2007/11/30 11:56 2007/11/30 11:56
Response
2 Trackbacks , 7 Comments
RSS :
http://hompy.info/rss/response/332

일정한 패턴을 가진 수백개의 파일을 수정해야 할때 여러분은 어떻게 처리하시나요? 어찌보면 나름대로 피곤할 수 있는 일을 처리할 수 있는 방법론입니다.

1) 아래 예제 처럼, find,grep,xargs,perl,vi 와 같은 명령어 그리고 정규식을 이용한다.
find . -type f  \( -name "*.txt" -o -name "*.doc" \) | xargs perl -pi -e "s/변경전/변경후/g"
find . -name "*.txt" -exec perl -pi -e "s/변경전/변경후/g" {} \; 2>/dev/null
perl -pi -e "s/변경전/변경후/g" *.txt
vi -c "%s/변경전/변경후/g" -c "wq" test.txt

2) bash, csh, perl, php 와 같은 스크립트 언어로 변경해주는 코드를 만든다. (이 방법의 간단한 PHP 예시를 글 끝에 덧붙였습니다.)

3) java 나 c 와 같은 고급 언어로 변경해주는 코드를 만든다.

4) Editplus 와 같은 편집기의 바꾸기 기능에서 정규식을 이용한다.

5) 매크로 기능이나 매크로 프로그램을 이용한다.

6) 다행히 바꿔주는 전용 유틸리티 프로그램이 있다면 그것을 이용한다.

7) 편집기에서 찾기, 바꾸기 기능을 이용한다.

8) 편집기로 일일이 확인해서 바꿔준다.

9) 부하 직원을 시켜 바꾸게 한다.

10) 아르바이트를 고용해서 바꾸게 한다.

11) 이도저도 귀찮다면, 다른 직업을 가질 것을 심각하게 고민해본다.

위에 열거한 방법들 중에서 어느 것이 보다 효율적이고 유익하다라고 단정할 수 없습니다. 각기 나름대로의 장단점을 가지고 있으니 적절하게 혼합해서 사용할 것을 권합니다.
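
참고로 2번 방법의 예로, 특정 디렉토리 아래의 텍스트 파일들에서 문자열을 일괄 치환하는 PHP 스케치를 하나 덧붙입니다. 디렉토리 경로, 확장자, 치환 문자열은 모두 예시이니 상황에 맞게 바꿔 쓰시고, 실행 전에는 반드시 백업을 해 두세요.

<?php
// 지정한 디렉토리를 재귀적으로 돌면서 *.txt 파일 안의 "변경전" 문자열을 "변경후"로
// 바꿔서 저장하는 간단한 스케치입니다. 경로/확장자/문자열은 모두 예시입니다.
$baseDir = "./docs";     // 예시 경로
$from    = "변경전";
$to      = "변경후";

$iter = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($baseDir));
foreach ($iter as $file) {
    if (!$file->isFile()) continue;
    if (strtolower(pathinfo($file->getPathname(), PATHINFO_EXTENSION)) != "txt") continue;

    $path     = $file->getPathname();
    $contents = file_get_contents($path);
    $replaced = str_replace($from, $to, $contents);
    if ($replaced !== $contents) {
        file_put_contents($path, $replaced);   // 실제로는 백업 후에 쓰는 것이 안전합니다
        echo "updated: " . $path . "\n";
    }
}
?>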

웹프로그래머의 홈페이지정보 블로그 http://hompy.info

Posted by 프로그래머

2007/11/26 18:50 2007/11/26 18:50

국내 통신사와 IDC 네임서버(DNS) IP.

간혹 네임서버(DNS) 문제로 인터넷 접속이 안될 경우가 있습니다. 설정한 네임서버에서 브라우져에 입력한 도메인 주소의 IP를 찾아주지 못해 생기는 문제입니다. 그럴땐 네트워크 설정에서 DNS IP 주소를 아래 열거하는 목록 중에 하나 선택해서 바꿔주면 해결됩니다.

회사 IP #1 IP #2 IP #3 도메인 #1 도메인 #2 도메인 #3
KT 168.126.63.1 168.126.63.2   kns.kornet.co.kr kns2.kornet.net  
하나로 210.220.163.82 219.250.36.130 210.94.6.67 qns1.hananet.net qns2.hananet.net qns3.hananet.net
하나로 210.94.0.73 221.139.13.130 210.180.98.74 cns1.hananet.net cns2.hananet.net cns3.hananet.net
하나로 211.58.252.62 211.58.252.94   ns.ngene.net ns2.ngene.net  
하나로 210.181.1.41 210.181.4.51   ns.dreamx.com ns2.dreamx.com  
하나로 131.107.1.7 210.94.0.7   ins1.hananet.net ins2.hananet.net  
두루넷 210.117.65.1 210.117.65.2   nsgr1.thrunet.com nsgr2.thrunet.com  
신비로 202.30.143.11 203.30.143.11   ns.shinbiro.com ns2.shinbiro.com  
데이콤 164.124.101.2 203.248.240.31   ns.dacom.co.kr ns2.dacom.co.kr  
드림라인 210.181.1.24 210.181.4.25   ns.cjdream.net ns2.cjdream.net  
파워콤 164.124.107.9 203.248.252.2   cns2.bora.net cns3.bora.net  
KT IDC 211.63.213.176 61.78.38.120   ns.kt-idc.com ns2.kt-idc.com  
데이콤IDC 203.248.250.24 203.248.250.25   ns1.kidc.net ns2.kidc.net  
하나로IDC 211.58.252.62 211.58.252.94   ns.ngene.net ns2.ngene.net  

[내 네트워크 환경] 에서 [네트워크 연결 보기] 선택

활성화된 [로컬 영역 연결] 선택 마우스 오른쪽 버튼 클릭하고 [속성] 선택


[다음 DNS 서버 주소 사용] 선택 / 수정하고 싶은 IP 로 변경


웹프로그래머의 홈페이지정보 블로그 http://hompy.info

Posted by 프로그래머

2007/11/18 13:24 2007/11/18 13:24
Response
No Trackback , a comment
RSS :
http://hompy.info/rss/response/313

같이 일하는 동료 기자분들이 사진에 워터마크를 간편하게 찍을 수 있는 툴이 필요하다고 해서 검색을 하다보니 쓸만한 툴이 하나 보이더군요. 사진 크기 조절과 워터마크 삽입을 간편하게 일괄 처리 해주는 무료 프로그램 VSO Image Resizer 라는 툴입니다.
첨부된 프로그램을 설치하고 실행하면 사진 또는 이미지를 선택 하게 되어 있고 사진들을 정하고 나면 아래와 같은 화면이 나옵니다.

Resolution 옆에 More 를 클릭하면 아래 화면처럼 펼쳐지며...

Integrate watermark 를 체크하고 Watermark 를 클릭해서 이미지를 선택하면 사진에 자신만의 로고나 이미지를 찍을 수 있습니다. 사진 포맷, 해상도나 품질 등을 변경하고 Ok 버튼을 누르면 선택한 사진들에 나만의 도장(워터마크)을 찍어 주며 이미지도 변환해 줍니다.
사진이나 이미지를 대량으로 관리해야 하는 사람들은 가볍게 사용할 수 있는 툴이네요.

웹프로그래머의 홈페이지정보 블로그 http://hompy.info

Posted by 프로그래머

2007/11/14 17:35 2007/11/14 17:35
Response
No Trackback , 7 Comments
RSS :
http://hompy.info/rss/response/309

