Search Results

Search found 5249 results on 210 pages for 'multi tier'.

Page 192 of 210

  • x11vnc working in Ubuntu 10.10

    - by pablorc
     I'm trying to start x11vnc on Ubuntu 10.10 (my server is on Amazon EC2), but I get the following error:

     $ sudo x11vnc -forever -usepw -httpdir /usr/share/vnc-java/ -httpport 5900 -auth /usr/sbin/gdm
     25/11/2010 13:29:51 passing arg to libvncserver: -httpport
     25/11/2010 13:29:51 passing arg to libvncserver: 5900
     25/11/2010 13:29:51 -usepw: found /home/ubuntu/.vnc/passwd
     25/11/2010 13:29:51 x11vnc version: 0.9.10 lastmod: 2010-04-28 pid: 3504
     25/11/2010 13:29:51 XOpenDisplay(":0.0") failed.
     25/11/2010 13:29:51 Trying again with XAUTHLOCALHOSTNAME=localhost ...
     25/11/2010 13:29:51 ***************************************
     25/11/2010 13:29:51 *** XOpenDisplay failed (:0.0)
     *** x11vnc was unable to open the X DISPLAY: ":0.0", it cannot continue. *** There may be "Xlib:" error messages above with details about the failure.

     Some tips and guidelines: ** An X server (the one you wish to view) must be running before x11vnc is started: x11vnc does not start the X server. (however, see the -create option if that is what you really want). ** You must use -display <disp>, -OR- set and export your $DISPLAY environment variable to refer to the display of the desired X server. - Usually the display is simply ":0" (in fact x11vnc uses this if you forget to specify it), but in some multi-user situations it could be ":1", ":2", or even ":137". Ask your administrator or a guru if you are having difficulty determining what your X DISPLAY is. ** Next, you need to have sufficient permissions (Xauthority) to connect to the X DISPLAY. Here are some Tips: - Often, you just need to run x11vnc as the user logged into the X session. So make sure to be that user when you type x11vnc. - Being root is usually not enough because the incorrect MIT-MAGIC-COOKIE file may be accessed. The cookie file contains the secret key that allows x11vnc to connect to the desired X DISPLAY. - You can explicitly indicate which MIT-MAGIC-COOKIE file should be used by the -auth option, e.g.: x11vnc -auth /home/someuser/.Xauthority -display :0 x11vnc -auth /tmp/.gdmzndVlR -display :0 you must have read permission for the auth file. See also '-auth guess' and '-findauth' discussed below. ** If NO ONE is logged into an X session yet, but there is a greeter login program like "gdm", "kdm", "xdm", or "dtlogin" running, you will need to find and use the raw display manager MIT-MAGIC-COOKIE file. Some examples for various display managers: gdm: -auth /var/gdm/:0.Xauth -auth /var/lib/gdm/:0.Xauth kdm: -auth /var/lib/kdm/A:0-crWk72 -auth /var/run/xauth/A:0-crWk72 xdm: -auth /var/lib/xdm/authdir/authfiles/A:0-XQvaJk dtlogin: -auth /var/dt/A:0-UgaaXa Sometimes the command "ps wwwwaux | grep auth" can reveal the file location. Starting with x11vnc 0.9.9 you can have it try to guess by using: -auth guess (see also the x11vnc -findauth option.) Only root will have read permission for the file, and so x11vnc must be run as root (or copy it). The random characters in the filenames will of course change and the directory the cookie file resides in is system dependent. See also: http://www.karlrunge.com/x11vnc/faq.html

     I've already tried some of the -auth options but the error persists. I have gdm running. Thank you in advance.

    Read the article

  • The best DVD ripper software in 2014 review

    - by user328170
     The top 3 DVD ripping tools in 2014. Nowadays almost everyone has several smart mobile devices, such as an iPhone, iPad Air, iPad mini, Samsung Galaxy or Sony Xperia. If you want to take your movies with you on your mobile devices, or just want to back up those classic physical discs on your notebook or workstation at high quality, you need fast, stable software to rip them and convert them to the format you like. Fortunately, there are plenty of products designed to make the process easy and turn a DVD into files that are playable on any mobile device you choose. We have done a full review of dozens of products; here are three of the best, based on that review. We rated each tool on ripping speed, ease of use, reliability and ripping capability.

     The top one is still WinX DVD Ripper Platinum. We tested its 6.1 version two years ago for its ability to quickly and easily rip DVDs and Blu-ray discs to high-quality MKV files with a single click, and it made a strong impression in that test. This time we tested its latest 7.3.5 version. Besides ease of use and speed, we tested its ability to decrypt discs using various protection methods, for example Disney X-project DRM, Sony ArccOS, RCE and region codes. The results show that WinX DVD Ripper Platinum still keeps its lead in all these areas. It is a focused DVD ripping tool whose basic job is to rip and convert DVDs. The UI has a clean, modern look: the main functions are shown prominently while the specialist options are hidden away for advanced users, making it clearer and more convenient to configure. Two companies, Weisoft Limited and Digiarty, provide the software; Weisoft Limited focuses on the USA, UK and Australia markets, Digiarty on the rest. Ripping speed: 5/5. Ease of use: 5/5. Reliability: 5/5. Ripping capability: 5/5.

     The second one is DVDFab. DVDFab is also very robust when ripping discs and can decrypt most of the discs on the market; where it falls short is ease of use and speed. We'd note that the app is frequently updated to cut through the copy protection on even the latest DVDs and Blu-ray discs. The app is shareware, meaning most features are free, including decrypting and ripping to your hard drive. Many of you note that you use another app for compression and authoring, but many of you also say that storage is cheap, and the rips from DVDFab are easy, one-click, and just work. Ripping speed: 3/5. Ease of use: 4/5. Reliability: 5/5. Ripping capability: 5/5.

     The third one is HandBrake. HandBrake is our favorite video encoder for a reason: it's simple, easy to use, easy to install, and offers lots of options for getting a high-quality file as a result. If the options scare you, you don't even have to touch them; the app will compensate and pick settings it thinks you'll like based on your destination device. So many of you like HandBrake that many of you use it in conjunction with another app (like VLC, which makes ripping easy): you let the other app do the rip and crack the DRM on your discs, then run the file through HandBrake for encoding. The app is fast, can make the most of multi-core processors to speed up the process, and is completely open source. Ripping speed: 3/5. Ease of use: 4/5. Reliability: 4/5. Ripping capability: 4/5.

    Read the article

  • Visual Studio build fails: unable to copy exe-file from obj\debug to bin\debug

    - by Nailuj
     This is a question that has been asked before, both here on Stack Overflow and in other places, but none of the suggestions I've found so far have helped me, so I just have to try asking a new question.

     Scenario: I have a simple Windows Forms application (C#, .NET 4.0, Visual Studio 2010). It has a couple of base forms that most other forms inherit from, and it uses Entity Framework (and POCO classes) for database access. Nothing fancy, no multi-threading or anything.

     Problem: All was fine for a while. Then, out of the blue, Visual Studio failed to build when I was about to launch the application. I got the warning "Unable to delete file '...bin\Debug\[ProjectName].exe'. Access to the path '...bin\Debug\[ProjectName].exe' is denied." and the error "Unable to copy file 'obj\x86\Debug\[ProjectName].exe' to 'bin\Debug\[ProjectName].exe'. The process cannot access the file 'bin\Debug\[ProjectName].exe' because it is being used by another process." (I get both the warning and the error when running Rebuild, but only the error when running Build - I don't think that is relevant?) I understand perfectly well what the warning and error messages say: Visual Studio is obviously trying to overwrite the exe file while at the same time holding a lock on it for some reason. However, that doesn't help me find a solution to the problem... The only thing I've found that works is to shut down Visual Studio and start it again. Building and launching then works, until I make a change in one of the forms, at which point I have the same problem again and have to restart... Quite frustrating!

     As I mentioned above, this seems to be a known problem, so there are lots of suggested solutions. I'll list what I've already tried here, so people know what to skip:

     - Creating a new, clean solution and just copying the files over from the old solution.
     - Adding the following to the project's pre-build event: if exist "$(TargetPath).locked" del "$(TargetPath).locked"   if not exist "$(TargetPath).locked" if exist "$(TargetPath)" move "$(TargetPath)" "$(TargetPath).locked"
     - Adding the following to the project properties (.csproj file): <GenerateResourceNeverLockTypeAssemblies>true</GenerateResourceNeverLockTypeAssemblies>

     However, none of them worked for me, so you can probably see why I'm starting to get a bit frustrated. I don't know where else to look, so I hope somebody has something to give me! Is this a bug in VS, and if so is there a patch? Or have I done something wrong - do I have a circular reference or similar, and if so how could I find out? Any suggestions are highly appreciated :)

    Read the article

  • .Net Crystal Report printing application running on terminal service connection errors when session

    - by MrEdmundo
     I have created a .Net application to run on an app server; it receives requests for a report and prints out the requested report. The C# application uses Crystal Reports to load the report and subsequently print it. The application runs on a server that is connected to via a Remote Desktop connection under a particular user account (required for old apps). When I disconnect from the remote session, the application starts raising exceptions such as: Message: CrystalDecisions.Shared.CrystalReportsException: Load report failed. This type of error is never raised while the remote session is active. The server running the app is running Windows Server 2003; my box, which creates the connection, is Windows XP. I appreciate this is fairly weird, but I cannot see any problem with the application deployment I have created. Does anyone know what could be causing this issue?

     EDIT: I bit the bullet and recreated the application as a Windows service. This doesn't take long; I just wasn't convinced it would solve the problem. Anyway, it doesn't!!! I have also tried removing the multi-threaded code that was calling the print function asynchronously. I did this to simplify the app and narrow down the reason it could fail. Anyway, this didn't improve the situation either!

     EDIT: The two errors I get are:

     System.Runtime.InteropServices.COMException (0x80000201): Invalid printer specified.
     at CrystalDecisions.ReportAppServer.Controllers.PrintOutputControllerClass.ModifyPrinterName(String newVal)
     at CrystalDecisions.CrystalReports.Engine.PrintOptions.set_PrinterName(String value)
     at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

     The printer isn't invalid; this is confirmed when, 60 seconds later, the timer ticks and the report is printed successfully. And:

     The request could not be submitted for background processing.
     at CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.GetLastPageNumber(RequestContext pRequestContext)
     at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
     --- End of inner exception stack trace ---
     at CrystalDecisions.ReportAppServer.ConvertDotNetToErom.ThrowDotNetException(Exception e)
     at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
     at CrystalDecisions.CrystalReports.Engine.FormatEngine.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
     at CrystalDecisions.CrystalReports.Engine.ReportDocument.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
     at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

     EDIT: I ran Filemon to check whether there were any access issues. At the point when the error occurs, Filemon reports: Request: OPEN | Path: C:\windows\assembly\gac_msil\system\2.0.0.0__b77a5c561934e089\ws2_32.dll | Result: NOT FOUND | Other: Attributes Error
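
     One pattern that sometimes helps with an intermittent "Invalid printer specified" COMException is to set the printer name explicitly and retry the print call a couple of times before giving up, since the question notes the same report prints fine a minute later. This is only a hedged sketch built from the members visible in the stack traces above (PrintOptions.PrinterName and ReportDocument.PrintToPrinter); the retry count, delay and printer name are assumptions, not a confirmed fix for the session-disconnect behaviour.

```csharp
using System;
using System.Threading;
using CrystalDecisions.CrystalReports.Engine;

static class ReportPrinter
{
    // Retries the print a few times; the error in the question is transient
    // (the same report prints fine a minute later), so a short back-off is
    // sometimes enough to get past it.
    public static void PrintWithRetry(ReportDocument report, string printerName)
    {
        const int maxAttempts = 3;                           // assumption: three tries
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                report.PrintOptions.PrinterName = printerName;   // e.g. a full UNC name like @"\\printserver\queue"
                report.PrintToPrinter(1, false, 0, 0);           // 1 copy, not collated, all pages
                return;
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                {
                    throw;                                       // give up; let the caller log it
                }
                Thread.Sleep(TimeSpan.FromSeconds(10));          // wait before retrying
            }
        }
    }
}
```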

    Read the article

  • Yesterday's broken codebase comes back to haunt me

    - by sandun dhammika
     I need a fun oky. I just love this OpenMoko hardware and hacking into it. Could somebody please help me compile QEMU? I'm so sad: I want to compile QEMU, it requires GCC 3.x, so I downloaded GCC 3.2, but when I configure and build it, it gives a very sad error message:

     G_FOR_TARGET=" "SHELL=/bin/sh" "EXPECT=expect" "RUNTEST=runtest" "RUNTESTFLAGS=" "exec_prefix=/gcc-3.2" "infodir=/gcc-3.2/info" "libdir=/gcc-3.2/lib" "prefix=/gcc-3.2" "tooldir=/gcc-3.2/i686-pc-linux-gnu" "AR=ar" "AS=as" "CC=gcc" "CXX=c++" "LD=ld" "LIBCFLAGS=-g -O2" "NM=nm" "PICFLAG=" "RANLIB=ranlib" "DESTDIR=" DO=all multi-do
     make[1]: Leaving directory `/gcc-3.2/gcc-3.2/zlib'
     make[1]: Entering directory `/gcc-3.2/gcc-3.2/fastjar'
     make[1]: Leaving directory `/gcc-3.2/gcc-3.2/fastjar'
     make[1]: Entering directory `/gcc-3.2/gcc-3.2/gcc'
     gcc -c -DIN_GCC -g -O2 -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wtraditional -pedantic -Wno-long-long -DHAVE_CONFIG_H -DGENERATOR_FILE -I. -I. -I. -I./. -I./config -I./../include ./read-rtl.c -o read-rtl.o
     In file included from ./read-rtl.c:24:0:
     ./rtl.h:125:3: warning: type of bit-field ‘code’ is a GCC extension
     ./rtl.h:128:3: warning: type of bit-field ‘mode’ is a GCC extension
     ./read-rtl.c: In function ‘fatal_with_file_and_line’:
     ./read-rtl.c:61:1: warning: traditional C rejects ISO C style function definitions
     ./read-rtl.c: In function ‘read_rtx’:
     ./read-rtl.c:662:8: error: lvalue required as increment operand
     make[1]: *** [read-rtl.o] Error 1
     make[1]: Leaving directory `/gcc-3.2/gcc-3.2/gcc'
     make: *** [all-gcc] Error 2

     This is so sad and this is sooo bad. I have searched for patches and workarounds all over the Internet, but I couldn't find any alternative. I'm out of patience now. I want that virtual machine ready so I can make a debug host, because I don't have the money to buy original Neo 1937 hardware. The patch that I did find comes with a nasty error too. I'm so sick of it. Any idea how I could fix this problem and make it work? Please, I'm begging you, somebody help me. Thanks all.

    Read the article

  • Uploadify Hanging at random on 100%

    - by Matty
    I am using Uploadify (http://www.uploadify.com/) to enable my users to upload images via my web application. The problem I am having is that every now and then (at what appears to be random) when the progress bar reaches 100% it 'hangs' and does nothing. I was wondering if any developers familiar with uploadify may have any idea how to solve this? I am in desperate need of some help. Here is my front-end code: jQuery(document).ready(function() { jQuery("#uploadify").uploadify({ 'uploader' : 'javascripts/uploadify.swf', 'script' : 'upload-file2.php', 'cancelImg' : 'css/images/cancel.png', 'folder' : 'uploads/personal_images/' + profileOwner, 'queueID' : 'fileQueue', 'auto' : true, 'multi' : true, 'fileDesc' : 'Image files', 'fileExt' : '.jpg;.jpeg;.gif;.png', 'sizeLimit' : '2097152', 'onComplete': function(event, queueID, fileObj, response, data) { processPersonalImage(fileObj.name); arrImgNames.push(fileObj.name); showUploadedImages(true); document.getElementById("photos").style.backgroundImage = "url('css/images/minicam.png')"; }, 'onAllComplete' : function() { completionMessage(arrFailedNames); document.getElementById("displayImageButton").style.display = "inline"; document.getElementById("photos").style.backgroundImage = "url('css/images/minicam.png')"; }, 'onCancel' : function() { arrImgNames.push(fileObj.name); arrFailedNames.push(fileObj.name); showUploadedImages(false); }, 'onError' : function() { arrImgNames.push(fileObj.name); arrFailedNames.push(fileObj.name); showUploadedImages(false); } }); }); And server side: if (!empty($_FILES)) { //Get user ID from the file path for use later.. $userID = getIdFromFilePath($_REQUEST['folder'], 3); $row = mysql_fetch_assoc(getRecentAlbum($userID, "photo_album_personal")); $subFolderName = $row['pk']; //Prepare target path / file.. $tempFile = $_FILES['Filedata']['tmp_name']; $targetPath = $_SERVER['DOCUMENT_ROOT'] . $_REQUEST['folder'] . '/'.$subFolderName.'/'; $targetFile = str_replace('//','/',$targetPath) . $_FILES['Filedata']['name']; //Move uploaded file from temp directory to new folder move_uploaded_file($tempFile,$targetFile); //Now add a record to DB to reflect this personal image.. if(file_exists($targetFile)) { //add photo record to DB $directFilePath = $_REQUEST['folder'] . '/'.$subFolderName.'/' . $_FILES['Filedata']['name']; addPersonalPhotoRecordToDb($directFilePath, $row['pk']); } echo "1"; die(true); } thanks for any help!!

    Read the article

  • Android: two issues using TableRow+TextView in TableLayout

    - by Yang
    I am using Tablerow+TextView to make a simple view for blog posts and their replies. In each TableRow I put a TextView in. Now I have two issues: The text which is longer than the screen won't automatically wrap up to be multi-line. Is it by design of TableRow? I've already set tr_content.setSingleLine(false); [update] This has been addressed, I think I should change Fill_parent to be Wrap_content in textView.tr_author_time.setLayoutParams(new LayoutParams( LayoutParams.**WRAP_CONTENT**, LayoutParams.WRAP_CONTENT)); The Table won't scroll like ListView. My rows are more than the screen size. I expect the table could be scrolled down for viewing just like ListView. Is that possible? Here is my code: TableLayout tl = (TableLayout) findViewById(R.id.article_content_table); TextView tr_title = new TextView(this); TextView tr_author_time = new TextView(this); TextView tr_content = new TextView(this); TableRow tr = new TableRow(this); for(int i = 0; i < BlogPost.size(); i++){ try{ // add the author, time tr = new TableRow(this); /////////////////add author+time row BlogPost article = mBlogPost.get(i); tr_author_time = new TextView(this); tr_author_time.setText(article.author+"("+ article.post_time+")"); tr_author_time.setTextColor(getResources().getColor(R.color.black)); tr_author_time.setGravity(0x03); tr_author_time.setLayoutParams(new LayoutParams( LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT)); tr.addView(tr_author_time); tl.addView(tr,new TableLayout.LayoutParams( LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT)); ////////////////////// then add content row tr = new TableRow(this); tr_content = new TextView(this); tr_content.setText(article.content); tr_content.setSingleLine(false); tr_content.setGravity(0x03); tr_content.setLayoutParams(new LayoutParams( LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT)); tr.addView(tr_content); tr.setBackgroundResource(R.color.white); tl.addView(tr,new TableLayout.LayoutParams( LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT)); }

    Read the article

  • C# Monte Carlo Incremental Risk Calculation optimisation, random numbers, parallel execution

    - by m3ntat
    My current task is to optimise a Monte Carlo Simulation that calculates Capital Adequacy figures by region for a set of Obligors. It is running about 10 x too slow for where it will need to be in production and number or daily runs required. Additionally the granularity of the result figures will need to be improved down to desk possibly book level at some stage, the code I've been given is basically a prototype which is used by business units in a semi production capacity. The application is currently single threaded so I'll need to make it multi-threaded, may look at System.Threading.ThreadPool or the Microsoft Parallel Extensions library but I'm constrained to .NET 2 on the server at this bank so I may have to consider this guy's port, http://www.codeproject.com/KB/cs/aforge_parallel.aspx. I am trying my best to get them to upgrade to .NET 3.5 SP1 but it's a major exercise in an organisation of this size and might not be possible in my contract time frames. I've profiled the application using the trial of dotTrace (http://www.jetbrains.com/profiler). What other good profilers exist? Free ones? A lot of the execution time is spent generating uniform random numbers and then translating this to a normally distributed random number. They are using a C# Mersenne twister implementation. I am not sure where they got it or if it's the best way to go about this (or best implementation) to generate the uniform random numbers. Then this is translated to a normally distributed version for use in the calculation (I haven't delved into the translation code yet). Also what is the experience using the following? http://quantlib.org http://www.qlnet.org (C# port of quantlib) or http://www.boost.org Any alternatives you know of? I'm a C# developer so would prefer C#, but a wrapper to C++ shouldn't be a problem, should it? Maybe even faster leveraging the C++ implementations. I am thinking some of these libraries will have the fastest method to directly generate normally distributed random numbers, without the translation step. Also they may have some other functions that will be helpful in the subsequent calculations. Also the computer this is on is a quad core Opteron 275, 8 GB memory but Windows Server 2003 Enterprise 32 bit. Should I advise them to upgrade to a 64 bit OS? Any links to articles supporting this decision would really be appreciated. Anyway, any advice and help you may have is really appreciated.
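
     On the random-number point: if the profiler shows most of the time going into generating uniforms and then transforming them, one cheap option is to keep a uniform generator per worker thread (so no locking) and turn pairs of uniforms into normals with a Box-Muller transform. The sketch below is only illustrative and targets .NET 2 (plain threads, no Parallel Extensions); System.Random stands in for whatever Mersenne Twister implementation you already have.

```csharp
using System;

// Per-thread normal generator: create one instance per worker thread so no locks are needed.
sealed class NormalGenerator
{
    private readonly Random uniform;   // stand-in for your Mersenne Twister
    private double spare;              // Box-Muller produces values in pairs
    private bool hasSpare;

    public NormalGenerator(int seed)
    {
        uniform = new Random(seed);    // seed each thread's generator differently
    }

    // Standard normal (mean 0, std dev 1) via the Box-Muller transform.
    public double NextNormal()
    {
        if (hasSpare)
        {
            hasSpare = false;
            return spare;
        }
        double u1 = 1.0 - uniform.NextDouble();   // shift into (0, 1] so Log is defined
        double u2 = uniform.NextDouble();
        double r = Math.Sqrt(-2.0 * Math.Log(u1));
        double theta = 2.0 * Math.PI * u2;
        spare = r * Math.Sin(theta);
        hasSpare = true;
        return r * Math.Cos(theta);
    }
}
```

     Whether this helps depends on where dotTrace says the time actually goes; and seeding each thread's generator independently matters as much as the transform itself, since correlated streams would quietly bias the capital figures.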

    Read the article

  • What IDE to use for Python

    - by husayt
    As a Python newbie, it is interesting to know what IDE's ("GUIs/editors") others use for Python coding. If you can just give the name (e.g. Textpad, Eclipse ..) that will be enough. If it is already mentioned, you can just vote for it. But if you can also give some more comparative information, that will be much appreciated. Thanks. Update: Results so far PyDev with Eclipse (CP, F, AC, PD, EM, SI, MLS, UML, SC, UT, LN, CF, BM) Komodo (CP, C/F, MLS, PD, AC, SC, SI, BM, LN, CF, CT) Emacs (CP, F, AC, MLS, PD, EM, SC, SI, BM, LN, CF, CT, UT, UML) Vim (CP, F, AC, MLS, SI, BM, LN, CF ) TextMate (Mac, CT, CF, MLS, SI, BM, LN) Gedit (Linux, F, AC, MLS, BM, LN, CT [sort of]) Idle (CP, F, AC) PIDA (Linux, CP, F, AC, MLS, SI, BM, LN, CF)(VIM Based) NotePad++ (Windows) BlueFish (Linux) JEdit (CP, F, BM, LN, CF, MLS) E-Texteditor (TextMate Clone for Windows) WingIde (CP, C, AC, MLS (support for C), PD, EM, SC, SI, BM, LN, CF, CT, UT) Eric Ide (CP, F, AC, PD, EM, SI, LN, CF, UT) Pyscripter (Windows, F, AC, PD, EM, SI, LN, CT, UT) ConTEXT (Windows, C) SPE (F, AC, UML) SciTE (CP, F, MLS, EM, BM, LN, CF, CT, SH) Zeus (W, C, BM, LN, CF, SI, SC, CT) NetBeans (CP, F, PD, UML, AC, MLS, SC, SI, BM, LN, CF, CT, UT, RAD) DABO (CP) BlackAdder (C, CP, CF, SI) PythonWin (W, F, AC, PD, SI, BM, CF) Geany (CP, F, very limited AC, MLS, SI, BM, LN, CF) UliPad (CP, F, AC, PD, MLS, SI, LI, CT, UT, BM) Boa Constructor (CP, F, AC, PD, EM, SI, BM, LN, UML, CF, CT) ScriptDev (W, C, AC, MLS, PD, EM, SI, BM, LN, CF, CT) Spider (CP, F, AC) Editra (CP, F, AC, MLS, SC, SI, BM, LN, CF) Pfaide (Windows, C, AC, MLS, SI, BM, LN, CF, CT) KDevelop (CP, F, MLS, SC, SI, BM, LN, CF) Acronyms used: CP - Cross Platfom C - Commercial F - Free AC - Automatic Code-completion MLS - Multi-Language Support PD - Integrated Python Debugging EM - ErrorMarkup SC - Source Control integration SI - Smart Indent BM - Bracket Matching LN - Line Numbering UML - UML editing / viewing CF - Code Folding CT - Code Templates UT - Unit Testing UID - Gui Designer (e.g. QT, Eric, ..) DB - integrated database support RAD - Rapid app development support I don't mention basics like Syntax highlighting as I expect these by default. This is a just dry list reflecting your feedback and comments, I am not advocating any of these tools. I will keep updating this list as you keep posting your answers. PS. Can you help me to add features of the above editors to the list (like autocomplete, debugging, or etc)?

    Read the article

  • how to use SQL wildcard % with Queryset extra>select?

    - by tylias
    I'm trying to add weights to search terms I'm using to filter a queryset. Using the '%' wildcard is causing me some problems. I'm using the extra() modifier to add a weight parameter to the queryset, which I will be using to inform a sort ordering. (See http://docs.djangoproject.com/en/1.1/ref/models/querysets/#extra-select-none-where-none-params-none-tables-none-order-by-none-select-params-none ) Here's the gist of the code: def viewname(request) ... exact_matchstrings="" exact_matchstrings.append("(accountprofile.first_name LIKE '" + term + "')") exact_matchstrings.append("(accountprofile.first_name LIKE '" + term + '\%' + "')") extraquerystring = " + ".join(exact_matchstrings) return_queryset = return_queryset.extra( select = { 'match_weight': extraquerystring }, ) The effect I'm going for is that if the search term matches exactly, the weight associated with the record is 2, but if the term merely starts with the search term and isn't an exact match, the weight is 1. (for example, if 'term' = 'Jon', an entry with first_name='Jon' gets a weight of 2 but an entry with an entry with first_name = 'Jonathan' gets a weight of 1.) I can test the statement in SQL and it seems to work well enough. If I make this SQL query from the mysql shell, no problem: select (first_name like "Carl") + (first_name like "Car%") from accountprofile; But trying to run it via the extra() modifier in my view code and evaluating the resulting queryset gives me the following error: Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 68, in __repr__ data = list(self[:REPR_OUTPUT_SIZE + 1]) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 83, in __len__ self._result_cache.extend(list(self._iter)) File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 238, in iterator for row in self.query.results_iter(): File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 287, in results_iter for rows in self.execute_sql(MULTI): File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 2369, in execute_sql cursor.execute(sql, params) File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 22, in execute sql = self.db.ops.last_executed_query(self.cursor, sql, params) File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 217, in last_executed_query return smart_unicode(sql) % u_params ValueError: unsupported format character ''' (0x27) at index 309 I've tried it escaping and not escaping % wildcard but that doesn't solve the problem. Doesn't seem to affect it at all, really. Any ideas?

    Read the article

  • Zend Framework + Uplodify Flash Uploader Troubles

    - by Richard Knop
    I've been trying to get the Uploadify flash uploader (www.uploadify.com) to work with Zend Framework, with no success so far. I have placed all Uploadify files under /public/flash-uploader directory. In the controller I include all required files and libraries like this: $this->view->headScript()->appendFile('/js/jquery-1.3.2.min.js'); $this->view->headLink()->appendStylesheet('/flash-uploader/css/default.css'); $this->view->headLink()->appendStylesheet('/flash-uploader/css/uploadify.css'); $this->view->headScript()->appendFile('/flash-uploader/scripts/swfobject.js'); $this->view->headScript()->appendFile('/flash-uploader/scripts/jquery.uploadify.v2.1.0.min.js'); And then I activate the plugin like this (#photo is id of the input file field): $(document).ready(function() { $("#photo").uploadify({ 'uploader' : '/flash-uploader/scripts/uploadify.swf', 'script' : 'my-account/flash-upload', 'cancelImg' : '/flash-uploader/cancel.png', 'folder' : 'uploads/tmp', 'queueID' : 'fileQueue', 'auto' : true, 'multi' : true, 'sizeLimit' : 2097152 }); }); As you can see I am targeting the my-account/flash-upload script as a backend processing (my-account is a controller, flash-upload is an action). My form markup looks like this: <form enctype="multipart/form-data" method="post" action="/my-account/upload-public-photo"><ol> <li><label for="photo" class="optional">File Queue<div id="fileQueue"></div></label> <input type="hidden" name="MAX_FILE_SIZE" value="31457280" id="MAX_FILE_SIZE" /> <input type="file" name="photo" id="photo" class="input-file" /></li> <li><div class="button"> <input type="submit" name="upload_public_photo" id="upload_public_photo" value="Save" class="input-submit" /></div></li></ol></form> And yet it's not working. The browse button doesn't even show up as in the demo page, I get only a regular input file field. Any ideas where could the problem be? I've already been staring into the code for hours and I cannot see any mistake anywhere and I'm starting to be exhausted after going through the same 30 lines of code 30 times in a row.

    Read the article

  • Synchronization requirements for FileStream.(Begin/End)(Read/Write)

    - by Doug McClean
    Is the following pattern of multi-threaded calls acceptable to a .Net FileStream? Several threads calling a method like this: ulong offset = whatever; // different for each thread byte[] buffer = new byte[8192]; object state = someState; // unique for each call, hence also for each thread lock(theFile) { theFile.Seek(whatever, SeekOrigin.Begin); IAsyncResult result = theFile.BeginRead(buffer, 0, 8192, AcceptResults, state); } if(result.CompletedSynchronously) { // is it required for us to call AcceptResults ourselves in this case? // or did BeginRead already call it for us, on this thread or another? } Where AcceptResults is: void AcceptResults(IAsyncResult result) { lock(theFile) { int bytesRead = theFile.EndRead(result); // if we guarantee that the offset of the original call was at least 8192 bytes from // the end of the file, and thus all 8192 bytes exist, can the FileStream read still // actually read fewer bytes than that? // either: if(bytesRead != 8192) { Panic("Page read borked"); } // or: // issue a new call to begin read, moving the offsets into the FileStream and // the buffer, and decreasing the requested size of the read to whatever remains of the buffer } } I'm confused because the documentation seems unclear to me. For example, the FileStream class says: Any public static members of this type are thread safe. Any instance members are not guaranteed to be thread safe. But the documentation for BeginRead seems to contemplate having multiple read requests in flight: Multiple simultaneous asynchronous requests render the request completion order uncertain. Are multiple reads permitted to be in flight or not? Writes? Is this the appropriate way to secure the location of the Position of the stream between the call to Seek and the call to BeginRead? Or does that lock need to be held all the way to EndRead, hence only one read or write in flight at a time? I understand that the callback will occur on a different thread, and my handling of state, buffer handle that in a way that would permit multiple in flight reads. Further, does anyone know where in the documentation to find the answers to these questions? Or an article written by someone in the know? I've been searching and can't find anything. Relevant documentation: FileStream class Seek method BeginRead method EndRead IAsyncResult interface
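
     One way to sidestep the shared-Position question entirely is to give each reader thread its own FileStream over the same file and do plain synchronous reads; since nothing is shared, no lock needs to be held between Seek and the read. This is just a sketch of that alternative, not a statement about what BeginRead itself guarantees.

```csharp
using System.IO;

static class PageReader
{
    // Each thread calls this with its own offset; opening a private stream per call
    // (or per thread) means there is no shared Position and no locking around Seek/Read.
    public static byte[] ReadPage(string path, long offset, int size)
    {
        byte[] buffer = new byte[size];
        using (FileStream fs = new FileStream(path, FileMode.Open,
                                              FileAccess.Read, FileShare.Read))
        {
            fs.Seek(offset, SeekOrigin.Begin);
            int total = 0;
            while (total < size)
            {
                int read = fs.Read(buffer, total, size - total);
                if (read == 0) break;   // end of file reached early
                total += read;          // Read may return fewer bytes than requested
            }
        }
        return buffer;
    }
}
```

     If you do stay with a single shared stream and BeginRead, the conservative reading of the documentation is to hold the lock from Seek all the way through EndRead, i.e. allow only one outstanding operation at a time.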

    Read the article

  • IE6 and fieldset background color?

    - by codemonkey613
    Hey, I'm having some difficulty with CSS and IE6 compatibility. URL: http://bit.ly/dlX7cS Problem #1: I put a background image on the fieldset around Canada and United States. In IE6 and IE7, the background bleeds above the border-top of the fieldset. So, I found a fix. It is applied only to IE browsers, and moves the legend up a few pixels, aligning the background correctly. <!-- Fix: IE6/IE7, Legends --> <!--[if lte IE 7]> <style type="text/css"> fieldset { position: relative; } fieldset legend { position: absolute; top: -0.5em; left: 0; } </style> <![endif]--> This fixes IE7. But in IE6, it seems to make my legend for Canada vanish completely. Does anyone have a copy of IE6 they can open my site and tell me if you see Canada label. (I am testing with a multi-IE program, and it keeps crashing. My copy might not be accurate). If it's not there, any suggestions on how to fix it? Also, any suggestion on where I can download working copy of IE6? Problem #2: I have a Google Map embedded using iframe. The width of that iframe is 515px. In Firefox, Chrome, IE7 -- that is the correct alignment. But in IE6, it gets <br/> underneath the Just Energy paragraph beside it. It doesn't fit. I have to change width to 513px for it to fit. Uhm, anyone know where those 2px of difference happen? I removed border, padding, margin from the iframe, but still something is happening. <!-- Google Maps --> <iframe class="gmap" src="http://maps.google.com/maps/ms?hl=en&amp;ie=UTF8&amp;msa=0&amp;msid=100146512697135839835.000481e2a2779e8865863&amp;ll=42,-100&amp;spn=20,80&amp;output=embed" frameborder="0" marginheight="0" marginwidth="0" scrolling="no"></iframe> <!-- / Google Maps --> Er, big headache. lol

    Read the article

  • Guides for PostgreSQL query tuning?

    - by Joe
    I've found a number of resources that talk about tuning the database server, but I haven't found much on the tuning of the individual queries. For instance, in Oracle, I might try adding hints to ignore indexes or to use sort-merge vs. correlated joins, but I can't find much on tuning Postgres other than using explicit joins and recommendations when bulk loading tables. Do any such guides exist so I can focus on tuning the most run and/or underperforming queries, hopefully without adversely affecting the currently well-performing queries? I'd even be happy to find something that compared how certain types of queries performed relative to other databases, so I had a better clue of what sort of things to avoid. update: I should've mentioned, I took all of the Oracle DBA classes along with their data modeling and SQL tuning classes back in the 8i days ... so I know about 'EXPLAIN', but that's more to tell you what's going wrong with the query, not necessarily how to make it better. (eg, are 'while var=1 or var=2' and 'while var in (1,2)' considered the same when generating an execution plan? What if I'm doing it with 10 permutations? When are multi-column indexes used? Are there ways to get the planner to optimize for fastest start vs. fastest finish? What sort of 'gotchas' might I run into when moving from mySQL, Oracle or some other RDBMS?) I could write any complex query dozens if not hundreds of ways, and I'm hoping to not have to try them all and find which one works best through trial and error. I've already found that 'SELECT count(*)' won't use an index, but 'SELECT count(primary_key)' will ... maybe a 'PostgreSQL for experienced SQL users' sort of document that explained sorts of queries to avoid, and how best to re-write them, or how to get the planner to handle them better. update 2: I found a Comparison of different SQL Implementations which covers PostgreSQL, DB2, MS-SQL, mySQL, Oracle and Informix, and explains if, how, and gotchas on things you might try to do, and his references section linked to Oracle / SQL Server / DB2 / Mckoi /MySQL Database Equivalents (which is what its title suggests) and to the wikibook SQL Dialects Reference which covers whatever people contribute (includes some DB2, SQLite, mySQL, PostgreSQL, Firebird, Vituoso, Oracle, MS-SQL, Ingres, and Linter).

    Read the article

  • Django + FastCGI - randomly raising OperationalError

    - by ibz
    I'm running a Django application. Had it under Apache + mod_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. Thanks. File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

    Read the article

  • WSIT, Maven, and wsimport -- Can They Work Together?

    - by rtperson
    Hi all, I'm working on a small-ish multi-module project in Maven. We've separated the UI from the database layer using Web Services, and thanks to the jaxws-maven-plugin, the creation of the WSDL and WS client are more or less handled for us. (The plugin is essentially a wrapper around wsgen and wsimport.) So far so good. The problem comes when I try to layer WSIT security into the picture. NetBeans allows me to generate the security metadata easily, but wsimport seems completely incapable of dealing with anything beyond a Basic-auth level of security. Here's our current, insecure way of calling wsimport during a Maven build: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>jaxws-maven-plugin</artifactId> <version>1.10</version> <executions> <execution> <goals> <goal>wsimport</goal> </goals> <configuration> <wsdlUrls> <wsdlUrl>${basedir}/../WebService/target/jaxws/wsgen/wsdl/WebService.wsdl</wsdlUrl> </wsdlUrls> <packageName>com.yourcompany.appname.ws.client</packageName> <sourceDestDir>${basedir}/src/main/java</sourceDestDir> <destDir>${basedir}/target/jaxws</destDir> </configuration> </execution> </executions> </plugin> I have tried playing around with xauthFile, xadditionalHeaders, passing javax.xml.ws.security.auth.username and password through args. I have also tried using wsimport from the command line to point to the Tomcat-generated WSDL, which has the additional security info. Nothing, however, seems to change the composition of the wsimport-generated files at all. So I guess my question here is, to get a WSIT-compliant client, am I stuck abandoning Maven and the jaxws plugin altogether? Is there a way to get a WSIT client to auto-generate? Or will I need to generate the client by hand? Let me know if you need any additional info beyond what I've written here. I'm deploying to Tomcat, although that doesn't seem to be an issue, as Maven seems happy to pull Metro into the deployed WAR file. Thanks in advance!

    Read the article

  • Finding minimum cut-sets between bounded subgraphs

    - by Tore
    If a game map is partitioned into subgraphs, how to minimize edges between subgraphs? I have a problem, Im trying to make A* searches through a grid based game like pacman or sokoban, but i need to find "enclosures". What do i mean by enclosures? subgraphs with as few cut edges as possible given a maximum size and minimum size for number of vertices for each subgraph that act as a soft constraints. Alternatively you could say i am looking to find bridges between subgraphs, but its generally the same problem. Given a game that looks like this, what i want to do is find enclosures so that i can properly find entrances to them and thus get a good heuristic for reaching vertices inside these enclosures. So what i want is to find these colored regions on any given map. My Motivation The reason for me bothering to do this and not just staying content with the performance of a simple manhattan distance heuristic is that an enclosure heuristic can give more optimal results and i would not have to actually do the A* to get some proper distance calculations and also for later adding competitive blocking of opponents within these enclosures when playing sokoban type games. Also the enclosure heuristic can be used for a minimax approach to finding goal vertices more properly. A possible solution to the problem is the Kernighan-Lin algorithm: function Kernighan-Lin(G(V,E)): determine a balanced initial partition of the nodes into sets A and B do A1 := A; B1 := B compute D values for all a in A1 and b in B1 for (i := 1 to |V|/2) find a[i] from A1 and b[i] from B1, such that g[i] = D[a[i]] + D[b[i]] - 2*c[a][b] is maximal move a[i] to B1 and b[i] to A1 remove a[i] and b[i] from further consideration in this pass update D values for the elements of A1 = A1 / a[i] and B1 = B1 / b[i] end for find k which maximizes g_max, the sum of g[1],...,g[k] if (g_max > 0) then Exchange a[1],a[2],...,a[k] with b[1],b[2],...,b[k] until (g_max <= 0) return G(V,E) My problem with this algorithm is its runtime at O(n^2 * lg(n)), i am thinking of limiting the nodes in A1 and B1 to the border of each subgraph to reduce the amount of work done. I also dont understand the c[a][b] cost in the algorithm, if a and b do not have an edge between them is the cost assumed to be 0 or infinity, or should i create an edge based on some heuristic. Do you know what c[a][b] is supposed to be when there is no edge between a and b? Do you think my problem is suitable to use a multi level problem? Why or why not? Do you have a good idea for how to reduce the work done with the kernighan-lin algorithm for my problem?
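
     On the c[a][b] question: in the usual statement of Kernighan-Lin the cost of a missing edge is taken as 0 (not infinity), so the gain formula still applies to non-adjacent pairs. Below is a small sketch of the D-value and gain computation under that convention, with the adjacency stored as a symmetric weight lookup that defaults to 0; the names are illustrative, not taken from any particular library.

```csharp
using System.Collections.Generic;

static class KlGain
{
    // weights[u][v] = weight of edge (u, v), stored in both directions;
    // a missing entry means "no edge", i.e. cost 0.
    static double Cost(Dictionary<int, Dictionary<int, double>> w, int a, int b)
    {
        Dictionary<int, double> row;
        double c;
        if (w.TryGetValue(a, out row) && row.TryGetValue(b, out c))
        {
            return c;
        }
        return 0.0;   // no edge between a and b
    }

    // D[v] = external cost minus internal cost of v with respect to partition (own, other).
    static double D(Dictionary<int, Dictionary<int, double>> w, int v,
                    IEnumerable<int> own, IEnumerable<int> other)
    {
        double external = 0.0, internalCost = 0.0;
        foreach (int u in other) external += Cost(w, v, u);
        foreach (int u in own) if (u != v) internalCost += Cost(w, v, u);
        return external - internalCost;
    }

    // Gain of swapping a (currently in A) with b (currently in B):
    // g = D[a] + D[b] - 2 * c[a][b]; a positive gain means the cut gets smaller.
    public static double Gain(Dictionary<int, Dictionary<int, double>> w,
                              int a, int b, ICollection<int> A, ICollection<int> B)
    {
        return D(w, a, A, B) + D(w, b, B, A) - 2.0 * Cost(w, a, b);
    }
}
```

     Restricting the candidate a/b pairs to boundary vertices, as you suggest, only changes which pairs you evaluate; the gain formula and the zero-cost convention for missing edges stay the same.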

    Read the article

  • PHP: Can pcntl_alarm() and socket_select() peacefully exist in the same thread?

    - by DWilliams
    I have a PHP CLI script mostly written that functions as a chat server for chat clients to connect to (don't ask me why I'm doing it in PHP, thats another story haha). My script utilizes the socket_select() function to hang execution until something happens on a socket, at which point it wakes up, processes the event, and waits until the next event. Now, there are some routine tasks that I need performed every 30 seconds or so (check of tempbanned users should be unbanned, save user databases, other assorted things). From what I can tell, PHP doesn't have very great multi-threading support at all. My first thought was to compare a timestamp every time the socket generates an event and gets the program flowing again, but this is very inconsistent since the server could very well sit idle for hours and not have any of my cleanup routines executed. I came across the PHP pcntl extensions, and it lets me use assign a time interval for SIGALRM to get sent and a function get executed every time it's sent. This seems like the ideal solution to my problem, however pcntl_alarm() and socket_select() clash with each other pretty bad. Every time SIGALRM is triggered, all sorts of crazy things happen to my socket control code. My program is fairly lengthy so I can't post it all here, but it shouldn't matter since I don't believe I'm doing anything wrong code-wise. My question is: Is there any way for a SIGALRM to be handled in the same thread as a waiting socket_select()? If so, how? If not, what are my alternatives here? Here's some output from my program. My alarm function simply outputs "Tick!" whenever it's called to make it easy to tell when stuff is happening. This is the output (including errors) after allowing it to tick 4 times (there were no actual attempts at connecting to the server despite what it says): [05-28-10 @ 20:01:05] Chat server started on 192.168.1.28 port 4050 [05-28-10 @ 20:01:05] Loaded 2 users from file PHP Notice: Undefined offset: 0 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112 PHP Warning: socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116 [05-28-10 @ 20:01:15] Tick! PHP Warning: socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126 [05-28-10 @ 20:01:25] Tick! PHP Warning: socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129 [05-28-10 @ 20:01:25] Accepting socket connection from PHP Notice: Undefined offset: 1 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112 PHP Warning: socket_select(): unable to select [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 116 [05-28-10 @ 20:01:35] Tick! PHP Warning: socket_accept(): unable to accept incoming connection [4]: Interrupted system call in /home/danny/projects/PHPChatServ/ChatServ.php on line 126 [05-28-10 @ 20:01:45] Tick! PHP Warning: socket_getpeername() expects parameter 1 to be resource, boolean given in /home/danny/projects/PHPChatServ/ChatServ.php on line 129 [05-28-10 @ 20:01:45] Accepting socket connection from PHP Notice: Undefined offset: 2 in /home/danny/projects/PHPChatServ/ChatServ.php on line 112

    Read the article

  • Is DataRow thread safe? How to update a single datarow in a datatable using multiple threads? - .net

    - by NLV
    Hello all I want to update a single datarow in a datatable using multiple threads. Is this actually possible? I've written the following code implementing a simple multi-threading to update a single datarow. I get different results each time. Why is it so? public partial class Form1 : Form { private static DataTable dtMain; private static string threadMsg = string.Empty; public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { Thread[] thArr = new Thread[5]; dtMain = new DataTable(); dtMain.Columns.Add("SNo"); DataRow dRow; dRow = dtMain.NewRow(); dRow["SNo"] = 5; dtMain.Rows.Add(dRow); dtMain.AcceptChanges(); ThreadStart ts = new ThreadStart(delegate { dtUpdate(); }); thArr[0] = new Thread(ts); thArr[1] = new Thread(ts); thArr[2] = new Thread(ts); thArr[3] = new Thread(ts); thArr[4] = new Thread(ts); thArr[0].Start(); thArr[1].Start(); thArr[2].Start(); thArr[3].Start(); thArr[4].Start(); while (!WaitTillAllThreadsStopped(thArr)) { Thread.Sleep(500); } foreach (Thread thread in thArr) { if (thread != null && thread.IsAlive) { thread.Abort(); } } dgvMain.DataSource = dtMain; } private void dtUpdate() { for (int i = 0; i < 1000; i++) { try { dtMain.Rows[0][0] = Convert.ToInt32(dtMain.Rows[0][0]) + 1; dtMain.AcceptChanges(); } catch { continue; } } } private bool WaitTillAllThreadsStopped(Thread[] threads) { foreach (Thread thread in threads) { if (thread != null && thread.ThreadState == ThreadState.Running) { return false; } } return true; } } Any thoughts on this? Thank you NLV
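
     DataTable and DataRow make no thread-safety guarantees for writes, so the read-increment-write in dtUpdate is a race even before AcceptChanges gets involved, which is why the totals differ from run to run. A minimal hedged fix is to serialise the update with a lock and call AcceptChanges once at the end; a sketch below, reusing the field names from the question.

```csharp
// Shared gate for all writer threads; DataRow writes are not thread safe.
private static readonly object rowLock = new object();

private void dtUpdate()
{
    for (int i = 0; i < 1000; i++)
    {
        lock (rowLock)
        {
            // The read-modify-write must be atomic with respect to the other threads.
            dtMain.Rows[0][0] = Convert.ToInt32(dtMain.Rows[0][0]) + 1;
        }
    }
}

// After all threads have finished (e.g. once WaitTillAllThreadsStopped returns true):
// lock (rowLock) { dtMain.AcceptChanges(); }
```

     With five threads each doing 1,000 increments and the row starting at 5, the grid should then show 5,005 on every run.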

    Read the article

  • How to honor/inherit user's language settings in WinForm app

    - by msorens
    I have worked with globalization settings in the past but not within the .NET environment, which is the topic of this question. What I am seeing is most certainly due to knowledge I have yet to learn so I would appreciate illumination on the following. Setup: My default language setting is English (en-us specifically). I added a second language (Danish) on my development system (WinXP) and then opened the language bar so I could select either at will. I selected Danish on the language bar then opened Notepad and found the language reverted to English on the language bar. I understand that the language setting is per application, so it seemed that Notepad set the default back to English. (I found that strange since Windows and thus Notepad is used all over the world.) Closing Notepad returned the setting on the language bar to Danish. I then launched my open custom WinForm application--which I know does not set the language--and it also reverted from English to Danish when opened, then back to Danish when terminated! Question #1A: How do I get my WinForm application upon launch to inherit the current setting of the language bar? My experiment seems to indicate that each application starts with the system default and requires the user to manually change it once the app is running--this would seem to be a major inconvenience for anyone that wants to work with more than one language! Question #1B: If one must, in fact, set the language manually in a multi-language scenario, how do I change my default system language (e.g. to Danish) so I can test my app's launch in another language? I added a display of the current language in my application for this next experiment. Specifically I set a MouseEnter handler on a label that set its tooltip to CultureInfo.CurrentCulture.Name so each time I mouse over I thought I should see the current language setting. Since setting the language before I launch my app did not work, I launched it then set the language to Danish. I found that some things (like typing in a TextBox) did honor this Danish setting. But mousing over the instrumented label still showed en-us! Question #2A: Why does CultureInfo.CurrentCulture.Name not reflect the change from my language bar while other parts of my app seem to recognize the change? (Trying CultureInfo.CurrentUICulture.Name produced the same result.) Question #2B: Is there an event that fires upon changes on the language bar so I could recognize within my app when the language setting changes?
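
     One detail worth checking against questions 2A and 2B: the language bar switches the input language (keyboard layout), which WinForms exposes through InputLanguage rather than through CultureInfo.CurrentCulture; CurrentCulture stays whatever the thread started with unless you change it yourself. A small hedged sketch of reading the language-bar setting and reacting when it changes (the form and handler names are illustrative):

```csharp
using System.Globalization;
using System.Threading;
using System.Windows.Forms;

public class LanguageAwareForm : Form
{
    public LanguageAwareForm()
    {
        // What the language bar currently shows for this application.
        CultureInfo inputCulture = InputLanguage.CurrentInputLanguage.Culture;
        Text = "Input language: " + inputCulture.Name
             + "  Thread culture: " + Thread.CurrentThread.CurrentCulture.Name;

        // Fires when the user switches the language bar while this form has focus.
        InputLanguageChanged += OnInputLanguageChanged;
    }

    private void OnInputLanguageChanged(object sender, InputLanguageChangedEventArgs e)
    {
        Text = "Input language now: " + e.Culture.Name;

        // Optional: make formatting and resources follow the language bar too.
        // Thread.CurrentThread.CurrentCulture = e.Culture;
        // Thread.CurrentThread.CurrentUICulture = e.Culture;
    }
}
```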

    Read the article

  • Singleton: How should it be used

    - by Loki Astari
    Edit: From another question I provided an answer that has links to a lot of questions/answers about singeltons: More info about singletons here: So I have read the thread Singletons: good design or a crutch? And the argument still rages. I see Singletons as a Design Pattern (good and bad). The problem with Singleton is not the Pattern but rather the users (sorry everybody). Everybody and their father thinks they can implement one correctly (and from the many interviews I have done, most people can't). Also because everybody thinks they can implement a correct Singleton they abuse the Pattern and use it in situations that are not appropriate (replacing global variables with Singletons!). So the main questions that need to be answered are: When should you use a Singleton How do you implement a Singleton correctly My hope for this article is that we can collect together in a single place (rather than having to google and search multiple sites) an authoritative source of when (and then how) to use a Singleton correctly. Also appropriate would be a list of Anti-Usages and common bad implementations explaining why they fail to work and for good implementations their weaknesses. So get the ball rolling: I will hold my hand up and say this is what I use but probably has problems. I like "Scott Myers" handling of the subject in his books "Effective C++" Good Situations to use Singletons (not many): Logging frameworks Thread recycling pools /* * C++ Singleton * Limitation: Single Threaded Design * See: http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf * For problems associated with locking in multi threaded applications * * Limitation: * If you use this Singleton (A) within a destructor of another Singleton (B) * This Singleton (A) must be fully constructed before the constructor of (B) * is called. */ class MySingleton { private: // Private Constructor MySingleton(); // Stop the compiler generating methods of copy the object MySingleton(MySingleton const& copy); // Not Implemented MySingleton& operator=(MySingleton const& copy); // Not Implemented public: static MySingleton& getInstance() { // The only instance // Guaranteed to be lazy initialized // Guaranteed that it will be destroyed correctly static MySingleton instance; return instance; } }; OK. Lets get some criticism and other implementations together. :-)
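
     Since the question invites other implementations: for comparison with the C++ version above, here is one common C# counterpart, offered only as a sketch. The CLR's type-initialization guarantees give lazy, thread-safe construction without explicit locking, so the double-checked-locking problems discussed for C++ do not arise in this form.

```csharp
public sealed class MySingleton
{
    // The CLR runs the static initializer exactly once, so this is thread safe
    // without locks; construction is deferred until the type is first used.
    private static readonly MySingleton instance = new MySingleton();

    // An explicit static constructor stops the compiler marking the type
    // beforefieldinit, which keeps the initialization genuinely lazy.
    static MySingleton() { }

    // Private constructor: no outside instantiation, no copies.
    private MySingleton() { }

    public static MySingleton Instance
    {
        get { return instance; }
    }
}
```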

    Read the article

  • Minimalistic tools for developer documentation

    - by Pekka
    I am currently working on a large PHP CMS / Framework and documenting it extensively as I go along. In addition to phpdoc-style inline comments, I need to document XML structures, details on concepts and practices, write HOWTOs and so on. At the moment, I am using simple OpenOffice documents for that, but I'm unhappy with it and looking for a "real" documentation system. So, I am looking for recommendations for robust, minimalistic, easy-to-use documentation software. I have tried a number of Wikis, most prominently Dokuwiki. I like the open-minded approach, the freedom in editing, and the simplicity, but they provide little support in structuring a multi-chapter documentation, and make basic reorganisation tasks very difficult (e.g. moving pages to a different namespace). Working with the plugins is Cumbersome, and they are not really easy to use. Open Source would be a plus but is not a requirement. Thanks for all the suggestions. I have not had time to look into each one in detail. I will be trying Sphinx, especially because it provides so much support for a good structure. I may update this post later when I'm done and report how it worked out. The suggestions Trac's built-in wiki which is great but for my taste provides too little support for keeping a structure - it's perfect though for "normal", smaller size project documentation Markdown my current favourite because of its minimalism, however not sure yet whether maintaining a structure will be easy enough. A Markdown-Based system would of course be very easy to extend, e.g. to look up cross references from the project's code base. Of course it would be great to find something that already has that out of the box. The DocBook format and to edit, the commercial Oxygen XML Editor - a great standard for building documentation, no doubt. Maybe too "technical" for my purposes as I need something to open quickly, write into and go on coding. Still always worth a mention. Sphinx an Open Source, Python based documentation generator, promising structured documentation and extensive cross-referencing. Interesting and will take a look. Confluence a commercial but very affordable Wiki. XWiki, an Open Source playing in Confluence's league with numerous extensions and connectors to Eclipse and Microsoft Office. TiddlyWiki an open-source Wiki.

    Read the article

  • SHAREPOINT: Custom Field type property storage defined for custom field

    - by Eric Rockenbach
    ok here is a great question. I have a set of generic custom fields that are highly configurable from an end user perspective and the configuration is getting overbearing as there are nearly 100 plus items each custom field allows you to perform in the areas of Server/Client Validation, Server/Client Events/Actions, Server/Client Bindings parent/child, display properties for form/control, etc, etc. Right now I'm storing most of these values as "Text" in my field xml for my propertyschema. I'm very familiar with the multi column value, but this is not a complex custom type in sense it's an array. I also considered creating serilzable objects and stuffing them into the text field and then pulling out and de-serilizing them when editing through the field editor or acting on the rules through the custom spfield. So I'm trying to take the following for example <PropertySchema> <Fields> <Field Name="EntityColumnName" Hidden="TRUE" DisplayName="EntityColumnName" MaxLength="500" DisplaySize="200" Type="Text"> <default></default> </Field> <Field Name="EntityColumnParentPK" Hidden="TRUE" DisplayName="EntityColumnParentPK" MaxLength="500" DisplaySize="200" Type="Text"> <default></default> </Field> <Field Name="EntityColumnValueName" Hidden="TRUE" DisplayName="EntityColumnValueName" MaxLength="500" DisplaySize="200" Type="Text"> <default></default> </Field> <Field Name="EntityListName" Hidden="TRUE" DisplayName="EntityListName" MaxLength="500" DisplaySize="200" Type="Text"> <default></default> </Field> <Field Name="EntitySiteUrl" Hidden="TRUE" DisplayName="EntitySiteUrl" MaxLength="500" DisplaySize="200" Type="Text"> <default></default> </Field> </Fields> <PropertySchema> And turn it into this... <PropertySchema> <Fields> <Field Name="ServerValidationRules" Hidden="TRUE" DisplayName="ServerValidationRules" Type="ServerValidationRulesType"> <default></default> </Field> </Fields> <PropertySchema> Ideas?????
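
     If you go down the serialize-into-one-property route mentioned above, a minimal sketch with XmlSerializer could look like the following. The ServerValidationRules class and its properties are hypothetical stand-ins for your real rule container; the resulting XML string is what you would persist in a single Text property and deserialize again in the field editor and the custom SPField code.

```csharp
using System.IO;
using System.Xml.Serialization;

// Hypothetical container for one group of the ~100 configuration items.
public class ServerValidationRules
{
    public string EntityColumnName;
    public string EntityListName;
    public bool Required;
}

public static class RuleStorage
{
    // Serialize the rules object to an XML string for storage in the field's Text property.
    public static string Serialize(ServerValidationRules rules)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(ServerValidationRules));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, rules);
            return writer.ToString();
        }
    }

    // Rehydrate the rules object when the field editor or the custom field needs it.
    public static ServerValidationRules Deserialize(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(ServerValidationRules));
        using (StringReader reader = new StringReader(xml))
        {
            return (ServerValidationRules)serializer.Deserialize(reader);
        }
    }
}
```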

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database / datastore, but the .NET app is a single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes, and each tenant can have multiple document types. For example, one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date; another tenant might have an application form, which has Member Number, Application Number, Member Name and Application Date.
    So far I've used the old method which SharePoint used (uses?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value-pair model in the DB due to the volume of documents. This way we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type and is then presented with a list of search fields).
    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. I do have concerns about the rest of the data around a document, though. We store an "action history" against the document, which tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.
    So, my questions, I guess:
    - Is a NoSQL DB a good fit?
    - Is MongoDB the best choice for ASP.NET? (I saw Raven and Velocity, but they're still kind of beta.)
    - Can I store a key for each document and then store the action history in an MSSQL DB keyed on it? I don't need to do joins; it would only be used when a person clicks "View History" against a document.
    - How would performance compare between the two (NoSQL DB vs denormalized "document" table)?
    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
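    To make the per-tenant "document type with its own index fields" idea concrete, here is a rough pymongo sketch; the connection string, database, collection and field names are all made up for illustration, and the official .NET driver exposes equivalent calls.

      # Hypothetical sketch: one database per tenant, one collection for documents,
      # with each document carrying its own type-specific index fields.
      from pymongo import MongoClient, ASCENDING

      client = MongoClient("mongodb://localhost:27017")   # assumed connection string
      db = client["tenant_acme"]                          # one database per tenant
      docs = db["documents"]

      # An "invoice" document for this tenant; another tenant's "application form"
      # would simply carry different fields in the same collection.
      docs.insert_one({
          "doc_type": "invoice",
          "customer_id": "C-1001",
          "invoice_number": "INV-2044",
          "invoice_date": "2010-06-01",
          "blob_key": "tiff/2010/06/inv-2044.tif",        # pointer to the stored image
      })

      # Index the per-type search fields; sparse, so documents of other types
      # (which lack invoice_number) stay out of this index entirely.
      docs.create_index("doc_type")
      docs.create_index([("invoice_number", ASCENDING)], sparse=True)

      # The action history can stay in MSSQL, keyed on the document's _id (or blob_key).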

    Read the article

  • Can Hudson branch promotion get based on project stability?

    - by Wayne
    Hudson CI server displays stability "weather", which is cool, and it allows one project build to kick off based on the successful build of another. However, how can you make that secondary project additionally dependent on the stability of multiple builds of the first project? Specifically, project "stable_deploy" should only kick off and promote a version to "stable" if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times - in a row. Until then, it's still in integration mode.
    IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version, 8.3.4.1233, may account for only the 2 most recent builds; the builds prior to that may be of an earlier version. We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track a version release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle.
    Here are some background details: the TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and auto tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, and has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch.
    Our Hudson projects look like this:
    - test: only for testing a build; zero user visibility.
    - integrate_deploy: promotes a test project build to the integrate branch and makes it available to the public for UA testing.
    - integrate: repeatedly builds the integrate branch to determine whether it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
    - stable_deploy: promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
    - stable: builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate".
    And so on... it continues with "release candidate" and then "release".
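    One possible gate, sketched below under stated assumptions: a small script queries Hudson's remote JSON API for the "integrate" job, counts how many of the most recent builds are both successful and for the version in question, and only lets "stable_deploy" proceed past a threshold. The server URL, job name and the idea that the version string is recorded in each build's description (a build parameter would work just as well) are all assumptions for illustration.

      # Hypothetical gate: count consecutive successful "integrate" builds for one
      # version via Hudson's remote JSON API and only allow promotion past a threshold.
      import json
      import urllib.request

      HUDSON = "http://hudson.example.com"   # assumed server URL
      JOB = "integrate"                      # assumed job name
      VERSION = "8.3.4.1233"
      REQUIRED = 8

      # Ask only for build number, result and description; the version string is
      # assumed to be recorded in each build's description.
      url = f"{HUDSON}/job/{JOB}/api/json?tree=builds[number,result,description]"
      with urllib.request.urlopen(url) as resp:
          builds = json.load(resp)["builds"]   # newest build comes first

      streak = 0
      for b in builds:
          if VERSION not in (b.get("description") or ""):
              break                            # a build of another version ends the streak
          if b.get("result") != "SUCCESS":
              break                            # a failed or unstable build ends the streak
          streak += 1

      if streak >= REQUIRED:
          print(f"{VERSION}: {streak} consecutive green builds - promote to stable")
      else:
          print(f"{VERSION}: only {streak} consecutive green builds - keep integrating")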

    Read the article
