Search Results

Search found 3592 results on 144 pages for 'pointer'.


  • Server 2003 R2 doesn't allow logon after a few days of uptime

    - by Bryan
    We have a Server 2003 R2 Standard machine (which I'll refer to as SRV01) that's knocking on a bit now, but it still acts as a file, print and SQL server on our company's network. SRV01 hosts user profiles, home directories and pretty much all our business data. Note our AD is currently at 2008 R2 level. This server is due to be upgraded in the next 12 months, but I've no budget to spend on it just yet.

    A bit of history of this server follows. When SRV01 was first commissioned, it acted as a domain controller (with the same 2003 R2 install it has today), paired with another server that ran Server 2003 R2 SBS. A few years ago, we purchased a pair of dedicated DCs (2008 R2); at that point we decommissioned the 2003 SBS server, and SRV01 was DCPROMOed out of the AD. Up until very recently, SRV01 also ran Exchange 2003; however, we've recently purchased a dedicated server for Exchange 2010 and upgraded (following the Microsoft recommended upgrade path). Exchange 2003 was recently uninstalled - cleanly, to the best of my knowledge.

    Ever since Exchange was removed from SRV01, I'm finding that after a few days of uptime, when I attempt to log on, pressing CTRL-ALT-DEL just hides the "Welcome to Windows Server 2003" banner and never presents the logon dialog. All I see is a movable mouse pointer and a blank background. It's a similar story with an admin TS session: the RDP client connects and gives me a blank background, but no logon dialog is presented. The RDP session hangs indefinitely until I give up and close it. The only way I've been able to regain access to the server is to pull the plug on it. Whilst the server does have a battery-backed RAID 5 controller, I'm unhappy about having to do this, so as a temporary measure I've created a scheduled job to reboot SRV01 each night.

    Not only do I dislike the idea of scheduling a reboot of a server like this, but it is also causing problems for users who leave desktop PCs logged on overnight. Users complain of 'Delayed Write Failures', there have also been a number of users who have started to complain about account lockout problems, and some users cannot connect to shares on SRV01 until they reboot their desktop PCs.

    I've examined event logs on SRV01 and on the DCs looking for clues as to what the problem is, but there really is nothing untoward being logged. How could I begin to investigate this problem when nothing of any relevance is being logged? Is there some additional logging that can be enabled that might give some clues as to what could be causing this problem? Could Performance Monitor help me out here, and if so, what counters would you consider monitoring?

    It's worth mentioning that whilst the server is unresponsive via the console and TS, it does still respond to clients connecting to shares without problems for several days, but after about a week I then start to hear users reporting problems accessing shares, though this seems quite sporadic. I've also tried leaving the console logged on (and locked); when I notice I can no longer log on via TS, I can unlock the server console without problem, but it refuses to reboot/shutdown, and subsequent attempts to reboot report that a system shutdown is already in progress, after which the system completely hangs. I've tried playing the waiting game for several hours, thinking that a timeout might allow the shutdown to continue, but to no avail.


  • Windows 7 BSOD - ntoskrnl?

    - by Ken Mason
    Two new HP Pavilion notebooks with Windows 7 Home Premium came pre-loaded with Norton. My first act was to use the Norton Removal Tool and load ZoneAlarm Free and AVG Free. I've had frequent random BSODs ever since. I found my way into the debugger and have had various reports regarding ntoskrnl, depending on the status of symbols. It's been many years since I played with (DOS 3.x) debug, so this has been a considerable fumble. Excerpts follow, and any insights would be greatly appreciated, as I am not a developer:

        ADDITIONAL_DEBUG_TEXT:
        Use '!findthebuild' command to search for the target build information.
        If the build information is available, run '!findthebuild -s ; .reload' to set symbol path and load symbols.

        MODULE_NAME: nt
        FAULTING_MODULE: fffff8000305d000 nt
        DEBUG_FLR_IMAGE_TIMESTAMP: 4b88cfeb
        BUGCHECK_STR: 0x7f_8
        CUSTOMER_CRASH_COUNT: 1
        DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT
        CURRENT_IRQL: 0
        LAST_CONTROL_TRANSFER: from fffff800030ccb69 to fffff800030cd600

        STACK_TEXT:
        fffff80004d6fd28 fffff800030ccb69 : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt+0x70600
        fffff80004d6fd30 000000000000007f : 0000000000000008 0000000080050033 00000000000006f8 fffff80003095e58 : nt+0x6fb69
        fffff80004d6fd38 0000000000000008 : 0000000080050033 00000000000006f8 fffff80003095e58 0000000000000000 : 0x7f
        fffff80004d6fd40 0000000080050033 : 00000000000006f8 fffff80003095e58 0000000000000000 0000000000000000 : 0x8
        fffff80004d6fd48 00000000000006f8 : fffff80003095e58 0000000000000000 0000000000000000 0000000000000000 : 0x80050033
        fffff80004d6fd50 fffff80003095e58 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : 0x6f8
        fffff80004d6fd58 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt+0x38e58

        STACK_COMMAND: kb
        FOLLOWUP_IP: nt+70600
        fffff800`030cd600 48894c2408 mov qword ptr [rsp+8],rcx
        SYMBOL_STACK_INDEX: 0
        SYMBOL_NAME: nt+70600
        FOLLOWUP_NAME: MachineOwner
        IMAGE_NAME: ntoskrnl.exe
        BUCKET_ID: WRONG_SYMBOLS
        Followup: MachineOwner

        0: kd> !lmi nt
        Loaded Module Info: [nt]
        Module: ntkrnlmp
        Base Address: fffff8000305d000
        Image Name: ntkrnlmp.exe
        Machine Type: 34404 (X64)
        Time Stamp: 4b88cfeb Sat Feb 27 00:55:23 2010
        Size: 5dc000
        CheckSum: 545094
        Characteristics: 22 perf
        Debug Data Dirs: Type Size VA Pointer
        CODEVIEW 25, 19c65c, 19bc5c RSDS - GUID: {7E9A3CAB-6268-45DE-8E10-816E3080A3B7} Age: 2, Pdb: ntkrnlmp.pdb
        CLSID 4, 19c658, 19bc58 [Data not mapped]
        Image Type: FILE - Image read successfully from debugger. ntkrnlmp.exe
        Symbol Type: PDB - Symbols loaded successfully from symbol server. d:\debugsymbols\ntkrnlmp.pdb\7E9A3CAB626845DE8E10816E3080A3B72\ntkrnlmp.pdb
        Load Report: public symbols, not source indexed d:\debugsymbols\ntkrnlmp.pdb\7E9A3CAB626845DE8E10816E3080A3B72\ntkrnlmp.pdb

        0: kd> !analyze -v
        * Bugcheck Analysis *

        UNEXPECTED_KERNEL_MODE_TRAP (7f)
        This means a trap occurred in kernel mode, and it's a trap of a kind that the kernel isn't allowed to have/catch (bound trap) or that is always instant death (double fault). The first number in the bugcheck params is the number of the trap (8 = double fault, etc). Consult an Intel x86 family manual to learn more about what these traps are. Here is a portion of those codes:
        If kv shows a taskGate, use .tss on the part before the colon, then kv.
        Else if kv shows a trapframe, use .trap on that value.
        Else .trap on the appropriate frame will show where the trap was taken (on x86, this will be the ebp that goes with the procedure KiTrap).
        Endif
        kb will then show the corrected stack.

        Arguments:
        Arg1: 0000000000000008, EXCEPTION_DOUBLE_FAULT
        Arg2: 0000000080050033
        Arg3: 00000000000006f8
        Arg4: fffff80003095e58

        Debugging Details:
        BUGCHECK_STR: 0x7f_8
        CUSTOMER_CRASH_COUNT: 1
        DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT
        PROCESS_NAME: System
        CURRENT_IRQL: 2
        LAST_CONTROL_TRANSFER: from fffff800030ccb69 to fffff800030cd600

        STACK_TEXT:
        fffff80004d6fd28 fffff800030ccb69 : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt!KeBugCheckEx
        fffff80004d6fd30 fffff800030cb032 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiBugCheckDispatch+0x69
        fffff80004d6fe70 fffff80003095e58 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDoubleFaultAbort+0xb2
        fffff880089efc60 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!SeAccessCheckFromState+0x58

        STACK_COMMAND: kb
        FOLLOWUP_IP: nt!KiDoubleFaultAbort+b2
        fffff800`030cb032 90 nop
        SYMBOL_STACK_INDEX: 2
        SYMBOL_NAME: nt!KiDoubleFaultAbort+b2
        FOLLOWUP_NAME: MachineOwner
        MODULE_NAME: nt
        IMAGE_NAME: ntkrnlmp.exe
        DEBUG_FLR_IMAGE_TIMESTAMP: 4b88cfeb
        FAILURE_BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b2
        BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b2
        Followup: MachineOwner

    I tried running RootkitRevealer, but I don't think it works on x64 systems. Similarly, Blacklight seems to have aged off. I'm running Sophos Anti-Rootkit now. So far so good...


  • Is it possible to repair a Cisco 3500 XL (3548) switch with POST Error messages?

    - by Alex
    I've got an old Cisco 3500 XL, and it seems to have hardware issues. I've loaded the latest IOS and cleared all config. Does anyone have any experience fixing the switch core? I'm a reasonably competent SMD solderer - can I replace/reflow some chips? I've checked the power supply voltages and it's all within tolerance, with no visible signs of any component damage. Some chips are hot to the touch. I understand that these were EOL as of 2007, but should have a lifetime warranty for the electronics. I don't have a Cisco support contract, so I can't file a ticket. What should I do?

    Console output:

        switch: dir flash:
        Directory of flash:/
        2 -rwx 1811584 <date> c3500xl-c3h2s-mz.120-5.WC17.bin
        1799680 bytes available (1812992 bytes used)
        switch: boot
        Loading "flash:c3500xl-c3h2s-mz.120-5.WC17.bin"...########################################
        File "flash:c3500xl-c3h2s-mz.120-5.WC17.bin" uncompressed and installed, entry point: 0x3000
        executing...

        Restricted Rights Legend
        Use, duplication, or disclosure by the Government is subject to restrictions as set forth in subparagraph (c) of the Commercial Computer Software - Restricted Rights clause at FAR sec. 52.227-19 and subparagraph (c) (1) (ii) of the Rights in Technical Data and Computer Software clause at DFARS sec. 252.227-7013.
        cisco Systems, Inc.
        170 West Tasman Drive
        San Jose, California 95134-1706

        Cisco Internetwork Operating System Software
        IOS (tm) C3500XL Software (C3500XL-C3H2S-M), Version 12.0(5)WC17, RELEASE SOFTWARE (fc1)
        Copyright (c) 1986-2007 by cisco Systems, Inc.
        Compiled Tue 13-Feb-07 15:04 by antonino
        Image text-base: 0x00003000, data-base: 0x00352924

        Initializing C3500XL flash...
        flashfs[1]: 1 files, 1 directories
        flashfs[1]: 0 orphaned files, 0 orphaned directories
        flashfs[1]: Total bytes: 3612672
        flashfs[1]: Bytes used: 1812992
        flashfs[1]: Bytes available: 1799680
        flashfs[1]: flashfs fsck took 3 seconds.
        flashfs[1]: Initialization complete.
        ...done Initializing C3500XL flash.

        C3500XL POST: System Board Test: Passed
        C3500XL POST: Daughter Card Test: Passed
        C3500XL POST: CPU Buffer Test: Passed
        C3500XL POST: CPU Notify RAM Test: Passed
        C3500XL POST: CPU Interface Test: Passed
        C3500XL POST: Testing Switch Core: Passed
        Error with Switch Core BIST test Phase 0.
        Returns: Test Complete Low : 0x0FFFFFFF, Test Complete High : 0xFFFFFFFE
        Test Phase Low : 0x00000040, Test Phase High : 0x00000000
        Test Phase Third : 0x00000000, Test Complete Third : 0x000001F8
        C3500XL POST FAILURE: Testing Switch Core: Failed
        C3500XL POST FAILURE: Testing Buffer Table: Failed
        C3500XL POST FAILURE: Data Buffer Test: Failed
        C3500XL POST FAILURE: Configuring Switch Parameters: Failed
        C3500XL POST FAILURE: Switch Core BIST failed.
        C3500XL POST FAILURE: Cannot test Modules due to failure of Switch Core POST
        Del Mar Failure (0th Del Mar): req system failed to init
        C3500XL POST FAILURE: C3500XL POST FAILURE: ATM: required system failed to init
        C3500XL POST: Ethernet Controller Test: Passed
        C3500XL POST FAILURE: MII Test: Failed
        C3500XL POST FAILURE: Error waiting for Ethernet Controller and SW_PARAMS
        C3500XL POST FAILURE: Initialization/POST failed
        C3500XL POST FAILURE: AT: Failing because system POST failed

        Exception (8192)!
        Debug Exception (Could be NULL pointer dereference)
        CPU Register Context:
        Vector = 0x00002000  PC = 0x000F36F4  MSR = 0x00029200  CR = 0x22000024
        LR = 0x000F6964  CTR = 0x001DE46C  XER = 0x00000000
        R0 = 0x00000000  R1 = 0x004E2580  R2 = 0x00000000  R3 = 0x00000000
        R4 = 0x00000001  R5 = 0x00000000  R6 = 0x004E2718  R7 = 0x004E2718
        R8 = 0x00000008  R9 = 0x00000000  R10 = 0x0000FFFF  R11 = 0x00480000
        R12 = 0x42000024  R13 = 0x00000000  R14 = 0x00000000  R15 = 0x00000000
        R16 = 0x00000000  R17 = 0x00000000  R18 = 0x00000000  R19 = 0x00000000
        R20 = 0x00000000  R21 = 0x00000000  R22 = 0x00000000  R23 = 0x00000000
        R24 = 0x00000000  R25 = 0x00000020  R26 = 0x004E2718  R27 = 0x004E2718
        R28 = 0x00000020  R29 = 0x00002513  R30 = 0x00000001  R31 = 0x00000000
        Stack trace:
        PC = 0x000F36F4, SP = 0x004E2580
        Frame 00: SP = 0x004E25A0  PC = 0x40000016
        Frame 01: SP = 0x004E2618  PC = 0x000F6964
        Frame 02: SP = 0x004E26A8  PC = 0x000F76DC
        Frame 03: SP = 0x004E26C8  PC = 0x000E8114
        Frame 04: SP = 0x004E26F0  PC = 0x001F5BF8
        Frame 05: SP = 0x004E2710  PC = 0x001F5CF4
        Frame 06: SP = 0x004E2748  PC = 0x0023F4DC
        Frame 07: SP = 0x004E2750  PC = 0x0023E650
        Frame 08: SP = 0x004E27C8  PC = 0x0023E89C
        Frame 09: SP = 0x004E27E0  PC = 0x0028AF34
        Frame 10: SP = 0x004E27E8  PC = 0x001E38F8
        Frame 11: SP = 0x004E2808  PC = 0x001E39A8
        Frame 12: SP = 0x004E2820  PC = 0x0014E220
        Frame 13: SP = 0x004E28C8  PC = 0x0014E39C
        Frame 14: SP = 0x00000000  PC = 0x001EB510


  • Exchange 2003 mail non-delivery (NDR), spam activity? events 7002 & 7004

    - by HighTechGeek
    Windows Server 2003 Small Business Server SP2, Exchange Version 6.5 (Build 7638.2: Service Pack 2).

    This network has been neglected, has been having email problems for years, and was on many blacklists. I was called in after the server eventually crashed... I got the server back up and running, but email problems persist. Outgoing mail delivery is sporadic: sometimes the mail goes through, sometimes a delayed delivery report is generated after a day or more, and sometimes it seems to go through but the recipient never receives it. I'm not sure if spammers are successfully using the server as a relay (see event entries below, after turning on maximum SMTP logging). What I've done so far:

    - User PCs were infected with viruses, and the server was blacklisted on many sites (I used mxtoolbox.com). I have cleaned all the PCs and changed all passwords (including administrator).
    - I have requested removal from all of the blacklists - most have removed the listing, some take more time.
    - I have set up rDNS pointer records with the ISP (Comcast) - the lack of them was one reason for some of the blacklistings.
    - I have tested that it's not an open relay using telnet, as described here: www.amset.info/exchange/smtp-openrelay.asp
    - I followed the advice of a Spamhaus & Microsoft article to enable maximum SMTP logging: http://www.spamhaus.org/faq/answers.lasso?section=isp%20spam%20issues#320 which directed me to Microsoft KB article 895853, specifically the part two-thirds down titled "If mail relay occurs from an account on an Exchange computer that is not configured as an open relay".

    The Application Event Log is filling with this type of activity (Event ID 7004, 7002 & 3018):

        Event Type: Error
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7004
        Date: 1/18/2011
        Time: 7:33:29 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol error log for virtual server ID 1, connection #621. The remote host "212.52.84.180", responded to the SMTP command "rcpt" with "550 #5.1.0 Address rejected [email protected]". The full command sent was "RCPT TO:". This will probably cause the connection to fail.

    and this:

        Event Type: Warning
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7002
        Date: 1/18/2011
        Time: 7:33:29 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol warning log for virtual server ID 1, connection #620. The remote host "212.52.84.170", responded to the SMTP command "rcpt" with "452 Too many recipients received this hour". The full command sent was "RCPT TO:". This may cause the connection to fail.

    or a variant of:

        Event Type: Warning
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7002
        Date: 1/18/2011
        Time: 8:39:21 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol warning log for virtual server ID 1, connection #661. The remote host "82.57.200.133", responded to the SMTP command "rcpt" with "421 Service not available - too busy". The full command sent was "RCPT TO:". This may cause the connection to fail.

    also:

        Event Type: Error
        Event Source: MSExchangeTransport
        Event Category: NDR
        Event ID: 3018
        Date: 1/18/2011
        Time: 9:49:37 AM
        User: N/A
        Computer: SERVER
        Description: A non-delivery report with a status code of 5.4.0 was generated for recipient rfc822;[email protected] (Message-ID ).
        Causes: This message indicates a DNS problem or an IP address configuration problem.
        Solution: Check the DNS using nslookup or dnsq. Verify the IP address is in IPv4 literal format.
        Data: 0000: ef 02 04 c0 ï..À

    Any guidance and/or suggestions and/or tests to perform would be greatly appreciated.


  • Dynamic DataGrid columns in WPF DataGrid based on the underlying set of data (and their type)

    - by StatsMan
    Hello everyone, I've got kind of a conceptual question. I am in the process of wrapping some statistics classes I wrote into WPF. For that I have two DataGrid(-Views, currently in WinForms). Each row in one DataGrid represents a column in the other. There I can set up different variables (as in mathematical/statistical variables) with fields like "Header", "DataType", "ValidationBehaviour" and "DisplayType", and I can also set up how each should be displayed: some columns can automatically be set to ComboBoxColumns, some to TextBoxColumns, and so on and so forth.

    Once I've set up these columns, I can go to the other grid and enter my data. I may, for instance, have generated (in grid 1) one column called "Annual Gross Salary" with input of numerical values, and another column called "Education" with "0=NoEducation", "1=College Level", "3=Universitary", etc. These labels are displayed as text in the combobox, and my statistics engine behind it then selects the respective value (0-3) for calculations (i.e. ordinal, nominal variables).

    In WinForms I could basically generate all the columns by hand in code and then add my data in the respective cells/rows. In WPF I thought that must be easy to replicate. However, when I got started yesterday with ICustomPropertyDescriptor, it (maybe I was too thick) didn't give me the results I was looking for. Basically, I just need to be able to dynamically generate columns (and rows) with different layouts and controls (ComboBox, simple input, DateTimes) based on the data that I have, but I don't really know how to go about it. So here in summary:

    DataGrid 1:
    - Its purpose is to display columns that have been specified in DataGrid 2.
    - In its rows, the user can add any kind of data below the columns, as long as the columns' specifications allow it.

    DataGrid 2:
    - Each row in this grid represents a column in DataGrid 1.
    - It contains fields like Name/Header, DataType, Validation Behaviour, Default Value, Data Formatting, etc.
    - It also lets the user set up how each column should be displayed, choosing from, for instance, ComboBoxColumn (including the available options), DateTime, normal TextBox, CheckBox, etc. (see the sketch below).
    - After the user finishes a row, it automatically appears as a new column in DataGrid 1.

    I'd appreciate any kind of pointer in the right direction. Thanks very, very much in advance! :)
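
    One shape the column-building code could take - a sketch only: DataGrid, DataGridTextColumn and DataGridComboBoxColumn are the WPF Toolkit types, but ColumnSpec and its fields are invented stand-ins for a row of DataGrid 2, and binding by header text assumes the headers are valid property names:

        // Hypothetical descriptor produced by one row of DataGrid 2.
        public class ColumnSpec
        {
            public string Header { get; set; }
            public string DisplayType { get; set; }   // "Text", "Combo", ...
            public string[] Options { get; set; }     // e.g. "0=NoEducation", ...
        }

        // Rebuild DataGrid 1's columns at runtime from the specs.
        void RebuildColumns(DataGrid grid, IEnumerable<ColumnSpec> specs)
        {
            grid.Columns.Clear();
            foreach (ColumnSpec spec in specs)
            {
                DataGridColumn col;
                if (spec.DisplayType == "Combo")
                    col = new DataGridComboBoxColumn { ItemsSource = spec.Options };
                else
                    col = new DataGridTextColumn { Binding = new Binding(spec.Header) };
                col.Header = spec.Header;
                grid.Columns.Add(col);
            }
        }

    Calling RebuildColumns whenever a row in DataGrid 2 is committed would keep the two grids in sync without any ICustomPropertyDescriptor machinery.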


  • Runtime unhandled exception while executing facedetect.py in opencv

    - by Rupesh Chavan
    When I tried to execute the facedetect.py Python script from the OpenCV sample examples, I got the following runtime exception. Can someone please give me some pointers or some clue about the exception and why it is occurring? Here is the trace:

        'python.exe': Loaded 'C:\Python26\python.exe'
        'python.exe': Loaded 'C:\WINDOWS\system32\ntdll.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\kernel32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\python26.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\user32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\gdi32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\advapi32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\rpcrt4.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\shell32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\msvcrt.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\shlwapi.dll'
        'python.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_6f74963e\msvcr90.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\imm32.dll'
        'python.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.2600.2982_x-ww_ac3f9c03\comctl32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\comctl32.dll'
        'python.exe': Loaded 'C:\Python26\Lib\site-packages\opencv_cv.pyd', Binary was not built with debug information.
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libcv200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libcxcore200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\Python26\Lib\site-packages\opencv_ml.pyd', Binary was not built with debug information.
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libml200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\Python26\Lib\site-packages\opencv_highgui.pyd', Binary was not built with debug information.
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libhighgui200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\WINDOWS\system32\ole32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\oleaut32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\avicap32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\winmm.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\version.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\msvfw32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\avifil32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\msacm32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\msctf.dll'
        'python.exe': Loaded 'C:\OpenCV2.0\bin\libopencv_ffmpeg200.dll', Binary was not built with debug information.
        'python.exe': Loaded 'C:\WINDOWS\system32\wsock32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\ws2_32.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\ws2help.dll'
        'python.exe': Loaded 'C:\WINDOWS\system32\MSCTFIME.IME'
        Unhandled exception at 0x00e7e4e4 in python.exe: 0xC0000005: Access violation reading location 0xffffffff.

    Thanks a lot in advance, Rupesh Chavan


  • Uploading multiple images through Tumblr API

    - by Joseph Carrington
    I have about 300 images I want to upload to my new Tumblr account, because my old WordPress site got hacked and I no longer wish to use WordPress. I uploaded one image a day for 300 days, and I'd like to be able to take these images and upload them to my Tumblr site using the API. The images are currently local, stored in /images/. They all have the date they were uploaded as the first ten characters of the filename (01-01-2009-filename.png), and I want to send this date parameter along as well. I want to be able to see the progress of the script by outputting the responses from the API to my error_log. Here is what I have so far, based on the Tumblr API page:

        // Authorization info
        $tumblr_email = '[email protected]';
        $tumblr_password = 'password';

        // Tumblr script parameters
        $source_directory = "images/";

    For each file, I need to assign the file to a pointer - and here's the first stumbling block: how do I get all of the images in the directory and loop through them? Once I have a for or while loop set up, I assume this is the next step:

        $post_data = fopen(dir(__FILE__) . $source_directory . $current_image, 'r');
        $post_date = substr($current_image, 0, 10);

        // Data for new record
        $post_type = 'photo';

        // Prepare POST request
        $request_data = http_build_query(
            array(
                'email' => $tumblr_email,
                'password' => $tumblr_password,
                'type' => $post_type,
                'data' => $post_data,
                'date' => $post_date,
                'generator' => 'Multi-file uploader'
            )
        );

        // Send the POST request (with cURL)
        $c = curl_init('http://www.tumblr.com/api/write');
        curl_setopt($c, CURLOPT_POST, true);
        curl_setopt($c, CURLOPT_POSTFIELDS, $request_data);
        curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
        $result = curl_exec($c);
        $status = curl_getinfo($c, CURLINFO_HTTP_CODE);
        curl_close($c);

        // Output response to error_log
        error_log($result);

    So, I'm stuck on how to use PHP to read a file directory, loop through each of the files, and do things with the name and with the file itself. I also need to know how to set the data parameter, as in choosing multipart/form-data. I also don't know anything about cURL.
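
    A minimal sketch of the missing directory loop, using only standard PHP (glob(), basename(), substr(), file_get_contents()); the glob pattern is an assumption - widen it if the images are not all PNGs:

        <?php
        // Loop over every image in the directory (adjust the pattern as needed).
        foreach (glob(__DIR__ . '/images/*.png') as $path) {
            $current_image = basename($path);            // e.g. "01-01-2009-filename.png"
            $post_date = substr($current_image, 0, 10);  // first ten chars = upload date
            $post_data = file_get_contents($path);       // raw file bytes for 'data'
            // ... build $request_data and send the cURL POST as in the snippet above ...
        }

    Reading the whole file with file_get_contents() sidesteps the fopen() pointer question: the Tumblr 'data' parameter wants the file contents, not a file handle.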


  • Playing an InputStream video in Blackberry JDE.

    - by Jenny
    I think I'm using InputStream incorrectly with a BlackBerry 9000 simulator. I found some sample code, http://www.blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/How%5FTo%5F-%5FPlay%5Fvideo%5Fwithin%5Fa%5FBlackBerry%5Fsmartphone%5Fapplication.html?nodeid=1383173&vernum=0 that lets you play video from within a BlackBerry app. The code claims it can handle HTTP, but it's taken some fandangling to get it to actually approach doing so: http://pastie.org/609491

    Specifically, I'm doing:

        StreamConnection s = null;
        s = (StreamConnection)Connector.open("http://10.252.9.15/eggs.3gp");
        HttpConnection c = (HttpConnection)s;
        InputStream i = c.openInputStream();
        System.out.println("~~~~~I have a connection?~~~~~~" + c);
        System.out.println("~~~~~I have a URL?~~~~" + c.getURL());
        System.out.println("~~~~~I have a type?~~~~" + c.getType());
        System.out.println("~~~~~I have a status?~~~~~~" + c.getResponseCode());
        System.out.println("~~~~~I have a stream?~~~~~~" + i);
        player = Manager.createPlayer(i, c.getType());

    I've found that this is the only way I can get an InputStream from an HttpConnection without causing a "JUM Error 104: Uncaught NullPointerException" (that is, casting as a StreamConnection and THEN as an HttpConnection stops it from crashing). However, I'm still not streaming video. Before, a stream couldn't be created at all (it would crash with the null pointer exception). Now a stream is being made, and the debugger claims it's beginning to stream video from it... and nothing happens. No video plays. The app doesn't freeze or crash or anything. I can 'pause' and 'play' freely, and get appropriate debug messages for both. But no video shows up.

    If I'm playing a video stored locally on the BlackBerry, everything is fine (it actually plays the video), so I know the Player itself is working fine; I'm just wondering if maybe I have something wrong with my stream? The API says the player can take an InputStream. Is there a specific kind it needs? How can I query my InputStream to know if it's valid? It existing is further than I've gotten before. -Jenny

    Edit: I'm on a BlackBerry Bold simulator (9000). I've heard that some versions of phones do NOT stream video via HTTP; however, the Bold does. I have yet to see examples of this, though. When I go to the internet and point at a BlackBerry-playable video, it attempts to stream and then asks me to physically download the file (and it plays fine once downloaded).

    Edit: Also, I have a physical BlackBerry Bold as well, but it can't stream either (I've gone to m.youtube.com, only to get a server/content-not-found error). Is there something special I need to do to stream RTSP content?
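
    One way to test whether the network stream itself is the problem - a sketch, only sensible for clips that fit in memory: buffer the whole HTTP response first, then hand the player a plain in-memory stream (ByteArrayInputStream/ByteArrayOutputStream are standard java.io classes available on the platform):

        // Drain the HTTP stream into memory, then play from the buffer.
        java.io.ByteArrayOutputStream buf = new java.io.ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = i.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        i.close();
        player = Manager.createPlayer(
                new java.io.ByteArrayInputStream(buf.toByteArray()), c.getType());

    If this plays, the Player is fine and the issue is progressive playback over the live connection rather than the content or MIME type.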


  • sqlite3-ruby can't make on rvm 1.8.7

    - by Josh Crews
    Upgrading to Rails 3 by starting with RVM and Ruby 1.8.7, on OS X 10.5.8. Output:

        josh-crewss-macbook:~ joshcrews$ gem install sqlite3-ruby
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3-ruby:
        ERROR: Failed to build gem native extension.
        /Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/bin/ruby extconf.rb
        checking for sqlite3.h... yes
        checking for sqlite3_libversion_number() in -lsqlite3... yes
        checking for rb_proc_arity()... no
        checking for sqlite3_column_database_name()... no
        checking for sqlite3_enable_load_extension()... no
        checking for sqlite3_load_extension()... no
        creating Makefile
        make
        gcc -I. -I. -I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0 -I. -I/usr/local/include -I/opt/local/include -I/usr/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -g -O2 -fno-common -pipe -fno-common -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -c database.c
        database.c: In function ‘deallocate’:
        database.c:17: warning: implicit declaration of function ‘sqlite3_next_stmt’
        database.c:17: warning: assignment makes pointer from integer without a cast
        database.c: In function ‘initialize’:
        database.c:76: warning: implicit declaration of function ‘sqlite3_open_v2’
        database.c:79: error: ‘SQLITE_OPEN_READWRITE’ undeclared (first use in this function)
        database.c:79: error: (Each undeclared identifier is reported only once
        database.c:79: error: for each function it appears in.)
        database.c:79: error: ‘SQLITE_OPEN_CREATE’ undeclared (first use in this function)
        database.c: In function ‘set_sqlite3_func_result’:
        database.c:277: error: ‘sqlite3_int64’ undeclared (first use in this function)
        database.c: In function ‘rb_sqlite3_func’:
        database.c:311: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype
        database.c: In function ‘rb_sqlite3_step’:
        database.c:378: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype
        make: *** [database.o] Error 1

    Gem list (these are under RVM; under the system Ruby I've got lots more gems, including the sqlite3-ruby that's worked for 1.5 years):

        *** LOCAL GEMS ***
        abstract (1.0.0), actionmailer (3.0.0.beta3), actionpack (3.0.0.beta3),
        activemodel (3.0.0.beta3), activerecord (3.0.0.beta3), activeresource (3.0.0.beta3),
        activesupport (3.0.0.beta3, 2.3.8), arel (0.3.3), builder (2.1.2), bundler (0.9.25),
        capybara (0.3.8), configuration (1.1.0), cucumber (0.7.2), cucumber-rails (0.3.1),
        culerity (0.2.10), database_cleaner (0.5.2), diff-lcs (1.1.2), erubis (2.6.5),
        ffi (0.6.3), gherkin (1.0.30), i18n (0.4.0, 0.3.7), json_pure (1.4.3),
        launchy (0.3.5), mail (2.2.1), memcache-client (1.8.3), mime-types (1.16),
        nokogiri (1.4.2), polyglot (0.3.1), rack (1.1.0), rack-mount (0.6.3),
        rack-test (0.5.4), rails (3.0.0.beta3), railties (3.0.0.beta3), rake (0.8.7),
        rdoc (2.5.8), rspec (2.0.0.beta.10, 2.0.0.beta.8), rspec-core (2.0.0.beta.10, 2.0.0.beta.8),
        rspec-expectations (2.0.0.beta.10, 2.0.0.beta.8), rspec-mocks (2.0.0.beta.10, 2.0.0.beta.8),
        rspec-rails (2.0.0.beta.10, 2.0.0.beta.8), rubygems-update (1.3.7),
        selenium-webdriver (0.0.20), spork (0.8.3), term-ansicolor (1.0.5),
        text-format (1.0.0), text-hyphen (1.0.0), thor (0.13.6), treetop (1.4.8),
        trollop (1.16.2), tzinfo (0.3.22), webrat (0.7.1)

    Version of Xcode: 3.1.1

    My suspicion is that it has to do with "-I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0", because i686-darwin9.8.0 doesn't exist in that path.
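
    For what it's worth, the failing checks (sqlite3_open_v2, sqlite3_int64, SQLITE_OPEN_READWRITE) are the symbols you'd lose when extconf.rb picks up a SQLite older than about 3.5. One thing worth trying - a sketch with hypothetical paths; point them at wherever a current SQLite is installed - is to pass the include/lib locations through to the extension build:

        # sketch - the paths are assumptions, adjust to your SQLite install
        gem install sqlite3-ruby -- --with-sqlite3-include=/opt/local/include \
                                    --with-sqlite3-lib=/opt/local/lib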


  • Sharing a UIView between UIViewControllers in a UITabBarController

    - by Wireless Designs
    Hi all - I have a UIScrollView that houses a gallery of images the user can scroll through. This view needs to be visible on each of three separate UIViewControllers that are housed within a UITabBarController. Right now, I have three separate UIScrollView instances in the UITabBarController subclass, and the controller manages keeping the three synchronized (when a user scrolls the one they can see, programmatically scrolling the other two to match, etc.), which is not ideal. I would like to know if there is a way to work with only ONE instance of the UIScrollView, but have it show up only in the UIViewController that the user is currently interacting with. This would completely eliminate all the synchronization code.

    Here is basically what I have now in the UITabBarController (which is where all this is currently managed):

        @interface ScrollerTabBarController : UITabBarController {
            FirstViewController *firstView;
            SecondViewController *secondView;
            ThirdViewController *thirdView;
            UIScrollView *scrollerOne;
            UIScrollView *scrollerTwo;
            UIScrollView *scrollerThree;
        }
        @property (nonatomic,retain) IBOutlet FirstViewController *firstView;
        @property (nonatomic,retain) IBOutlet SecondViewController *secondView;
        @property (nonatomic,retain) IBOutlet ThirdViewController *thirdView;
        @property (nonatomic,retain) IBOutlet UIScrollView *scrollerOne;
        @property (nonatomic,retain) IBOutlet UIScrollView *scrollerTwo;
        @property (nonatomic,retain) IBOutlet UIScrollView *scrollerThree;
        @end

        @implementation ScrollerTabBarController

        - (void)layoutScroller:(UIScrollView *)scroller {}
        - (void)scrollToMatch:(UIScrollView *)scroller {}

        - (void)viewDidLoad {
            [self layoutScroller:scrollerOne];
            [self layoutScroller:scrollerTwo];
            [self layoutScroller:scrollerThree];
            [scrollerOne setDelegate:self];
            [scrollerTwo setDelegate:self];
            [scrollerThree setDelegate:self];
            [firstView setGallery:scrollerOne];
            [secondView setGallery:scrollerTwo];
            [thirdView setGallery:scrollerThree];
        }

        - (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView {
            [self scrollToMatch:scrollView];
        }

        @end

    The UITabBarController gets notified (as the scroll view's delegate) when the user scrolls one of the instances, and then calls methods like scrollToMatch: to sync up the other two with the user's choice. Is there something that can be done, using a many-to-one relationship on IBOutlet or something like that, to narrow this down to one instance so I'm not having to manage three scroll views? I tried keeping a single instance and moving the pointer from one view to the next using the UITabBarControllerDelegate methods (calling setGallery:nil on the current and setGallery:scrollerOne on the next each time it changed), but the scroller never moved to the other tabs. Thanks in advance!
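
    A minimal sketch of the single-instance approach, assuming the tab bar controller acts as its own UITabBarControllerDelegate: re-parent the one scroller into whichever controller's view was just selected (removeFromSuperview/addSubview are standard UIKit; setting the delegate as a pointer swap is not enough, the view must actually move in the hierarchy):

        // Keep ONE scroller; move it whenever the selected tab changes.
        - (void)tabBarController:(UITabBarController *)tabBarController
         didSelectViewController:(UIViewController *)viewController
        {
            [scrollerOne removeFromSuperview];               // detach from old tab
            [viewController.view addSubview:scrollerOne];    // attach to new tab
        }

    With this in place, scrollerTwo/scrollerThree and all of the scrollToMatch: bookkeeping can be deleted.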


  • Why is code quality not popular?

    - by Peter Kofler
    I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact, I am fanatical about it. (Maybe even more than fanatical...) But in my experience, actions helping code quality are hardly ever implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not the scope of this question.)

    Code quality does not seem popular. Some examples from my experience:

    - Probably every Java developer knows JUnit, and almost all languages have xUnit frameworks, but in all the companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could have done so. My conclusion is that developers do not want to write tests.

    - Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.

    - Conference speakers and magazines talk a lot about EJB 3.1, OSGi, the cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, or how some nasty beast of legacy code was brought under test. (I did not attend many conferences, and it probably looks different for conferences on agile topics, as unit testing, CI and such have a higher value there.)

    So why is code quality so unpopular/considered boring?

    EDIT: Thank you for your answers. Most of them concern unit testing (and that has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.


  • surfaceview + glsurfaceview + framelayout

    - by pohtzeyun
    Hi, I'm new at this (Java and OpenGL), so please bear with me if the answer to the question is simple. :)

    I'm trying to get a camera preview screen with the ability to display 3D objects simultaneously. Having gone through the samples in the API demos, I thought combining the code from the examples would suffice, but somehow it's not working. The app forces a shutdown upon startup, and the error is reported as a null pointer exception. Could someone share with me where I went wrong and how to proceed from there? Here is how I combined the code:

    myoverview.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="horizontal"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <android.opengl.GLSurfaceView android:id="@+id/cubes"
                android:orientation="horizontal"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"/>
            <SurfaceView android:id="@+id/camera"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"/>
        </FrameLayout>

    myoverview.java:

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.SurfaceView;
        import android.view.Window;

        public class MyOverView extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                // Hide the window title.
                requestWindowFeature(Window.FEATURE_NO_TITLE);

                // camera view as the background
                SurfaceView cameraView = (SurfaceView) findViewById(R.id.camera);
                cameraView = new CameraView(this);

                // visual of both cubes
                GLSurfaceView cubesView = (GLSurfaceView) findViewById(R.id.cubes);
                cubesView = new GLSurfaceView(this);
                cubesView.setRenderer(new CubeRenderer(false));

                // set view
                setContentView(R.layout.myoverview);
            }
        }

    GLSurfaceView.java:

        import android.content.Context;

        class GLSurfaceView extends android.opengl.GLSurfaceView {
            public GLSurfaceView(Context context) {
                super(context);
            }
        }

    NOTE: I didn't list the rest of the files, as they are just copies of the API demos. CameraView refers to the camerapreview.java example, and CubeRenderer refers to the CubeRenderer.java and Cube.java examples. Any help would be appreciated, as I've been stuck at this for a couple of days. :p Thanks

    Sorry, didn't realise that the code was out of place due to formatting mistakes. :p
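
    One likely cause, judging only from the code above: findViewById() is called before setContentView(), so there is no inflated layout to search yet and the lookups return null; the new CameraView(this)/new GLSurfaceView(this) instances then overwrite those variables but are never attached to the layout. A sketch of the usual ordering (CubeRenderer is the API-demo class used in the question):

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE);

            // Inflate the layout FIRST; findViewById() searches the content view.
            setContentView(R.layout.myoverview);

            // Now the lookups return the views declared in myoverview.xml.
            android.opengl.GLSurfaceView cubesView =
                    (android.opengl.GLSurfaceView) findViewById(R.id.cubes);
            cubesView.setRenderer(new CubeRenderer(false));

            SurfaceView cameraView = (SurfaceView) findViewById(R.id.camera);
            // hand cameraView to the camera-preview code instead of new'ing one
        }

    Note the cast uses android.opengl.GLSurfaceView: the XML declares the framework class, so casting to the custom subclass of the same name would fail.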


  • NSString drawAtPoint Crash on the iPhone (NSString drawAtPoint)

    - by Kyle
    Hey. I have a very simple text-output-to-buffer system which will crash randomly. It will be fine for DAYS, then sometimes it'll crash a few times in a few minutes. The call stack is almost exactly the same as for other people who use higher-level controls:

    http://discussions.apple.com/thread.jspa?messageID=7949746
    http://stackoverflow.com/questions/1978997/iphone-app-crashed-assertion-failed-function-evictglyphentryfromstrike-file

    It crashes at the line (below, as well as in drawTextToBuffer()):

        [nsString drawAtPoint:CGPointMake(0, 0) withFont:clFont];

    I have the same call of "evict_glyph_entry_from_cache" with the abort calls immediately following it. Apparently it happens to other people. I can say that my NSString* is perfectly fine at the time of the crash; I can read the text from the debugger just fine.

        static CGColorSpaceRef curColorSpace;
        static CGContextRef myContext;
        static float w, h;
        static int iFontSize;
        static NSString* sFontName;
        static UIFont* clFont;
        static int iLineHeight;
        unsigned long* txb; /* 256x256x4 Buffer */

        void selectFont(int iSize, NSString* sFont)
        {
            iFontSize = iSize;
            clFont = [UIFont fontWithName:sFont size:iFontSize];
            iLineHeight = (int)(ceil([clFont capHeight]));
        }

        void initText()
        {
            w = 256;
            h = 256;
            txb = (unsigned long*)malloc_(w * h * 4);
            curColorSpace = CGColorSpaceCreateDeviceRGB();
            myContext = CGBitmapContextCreate(txb, w, h, 8, w * 4, curColorSpace,
                                              kCGImageAlphaPremultipliedLast);
            selectFont(12, @"Helvetica");
        }

        void drawTextToBuffer(NSString* nsString)
        {
            CGContextSaveGState(myContext);
            CGContextSetRGBFillColor(myContext, 1, 1, 1, 1);
            UIGraphicsPushContext(myContext);
            /* This line will crash. It crashes even with constant strings.
               At the time of the crash, the pointer to nsString is perfectly
               fine. The data looks fine! */
            [nsString drawAtPoint:CGPointMake(0, 0) withFont:clFont];
            UIGraphicsPopContext();
            CGContextRestoreGState(myContext);
        }

    It will happen with other non-Unicode-supporting methods as well, such as CGContextShowTextAtPoint(); the call stack is similar with that as well. Is this any kind of known issue with the iPhone? Or, perhaps, can something outside of this be causing an exception in this particular call (drawAtPoint)?


  • Hooking DirectX EndScene from an injected DLL

    - by Etan
    I want to detour EndScene from an arbitrary DirectX 9 application to create a small overlay. As an example, take the frame counter overlay of FRAPS, which is shown in games when activated. I know the following methods to do this:

    1. Creating a new d3d9.dll, which is then copied to the game's path. Since the current folder is searched first, before system32 etc., my modified DLL gets loaded, executing my additional code. Downside: you have to put it there before you start the game.

    2. Same as the first method, but replacing the DLL in system32 directly. Downside: you cannot add game-specific code, and you cannot exclude applications where you don't want your DLL to be loaded.

    3. Getting the EndScene offset directly from the DLL using tools like IDA Pro 4.9 Free. Since the DLL gets loaded as-is, you can just add this offset to the DLL's starting address, when it is mapped into the game, to get the actual address, and then hook it. Downside: the offset is not the same on every system.

    4. Hooking Direct3DCreate9 to get the IDirect3D9, then hooking IDirect3D9::CreateDevice to get the device pointer, and then hooking the device's EndScene through the virtual table. Downside: the DLL cannot be injected when the process is already running; you have to start the process with the CREATE_SUSPENDED flag to hook the initial Direct3DCreate9.

    5. Creating a new device in a new window as soon as the DLL gets injected, then getting the EndScene offset from this device and hooking it, resulting in a hook for the device which is used by the game. Downside: from some information I have read, creating a second device may interfere with the existing device, and it may have issues with windowed vs. fullscreen mode, etc.

    6. Same as the third method, but doing a pattern scan to get EndScene. Downside: doesn't look that reliable.

    How can I hook EndScene from an injected DLL - which may be loaded when the game is already running - without having to deal with different d3d9.dlls on other systems, and with a method that is reliable? How does FRAPS, for example, perform its DirectX hooks? The DLL should not apply to all games, just to specific processes where I inject it via CreateRemoteThread.
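
    For reference, a minimal sketch of the vtable half of methods 4/5 - EndScene sitting at index 42 of the IDirect3DDevice9 virtual table is well documented, and every device in the process shares that one vtable, which is what makes the throwaway-device trick work; the actual detour call is left to whatever hooking library is in use:

        #include <windows.h>
        #include <d3d9.h>

        typedef HRESULT (APIENTRY *EndScene_t)(IDirect3DDevice9*);
        static EndScene_t g_origEndScene = NULL;

        HRESULT APIENTRY MyEndScene(IDirect3DDevice9* device)
        {
            // ... draw the overlay here using `device` ...
            return g_origEndScene(device);   // then let the game present as usual
        }

        // Call with any IDirect3DDevice9* (e.g. a throwaway device created in a
        // hidden window right after injection).
        void HookEndScene(IDirect3DDevice9* device)
        {
            void** vtable = *reinterpret_cast<void***>(device);
            g_origEndScene = reinterpret_cast<EndScene_t>(vtable[42]); // EndScene slot
            // detour g_origEndScene -> MyEndScene with the hook library of choice
        }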


  • jQuery UI portlets - toggle portlets to save to a cookie (half way there!)

    - by Gareth
    Hi, I'm a bit of a jQuery n00b, so please excuse me if this seems like a stupid question.

    I am creating a site using the jQuery UI, more specifically the sortable portlets. I have been able to store whether a portlet is open or closed in a cookie. This is done using the following code (the #slider ID is currently where the controls are stored to turn each portlet on and off):

        var cookie = $.cookie("hidden");
        var hidden = cookie ? cookie.split("|").getUnique() : [];
        var cookieExpires = 7; // cookie expires in 7 days, or set this as a date object to specify a date

        // Remember content that was hidden
        $.each( hidden, function(){
            var pid = this; //parseInt(this,10);
            $('#' + pid).hide();
            $("#slider div[name='" + pid + "']").addClass('add');
        })

        // Add click functionality
        $("#slider div").click(function(){
            $(this).toggleClass('add');
            var el = $("div#" + $(this).attr('name'));
            el.toggle();
            updateCookie(el);
        });

        $('a.toggle').click(function(){
            $(this).parents(".portlet").hide();
            // *** Below line just needs to select the correct 'id' and insert it as a
            // selector, i.e. ('#slider div#block-1'), and then update the cookie! ***
            $('#slider div').addClass('add');
        });

        // Update the cookie
        function updateCookie(el){
            var indx = el.attr('id');
            var tmp = hidden.getUnique();
            if (el.is(':hidden')) {
                // add index of widget to hidden list
                tmp.push(indx);
            } else {
                // remove element id from the list
                tmp.splice( tmp.indexOf(indx) , 1);
            }
            hidden = tmp.getUnique();
            $.cookie("hidden", hidden.join('|'), { expires: cookieExpires } );
        }
        })

        // Return a unique array.
        Array.prototype.getUnique = function() {
            var o = new Object();
            var i, e;
            for (i = 0; e = this[i]; i++) {o[e] = 1};
            var a = new Array();
            for (e in o) {a.push (e)};
            return a;
        }

    What I would like to do is also add an [x] into the corner of each portlet to give the user another way of hiding it, but I currently can't get this to be stored in the cookie using the code above. Can anyone give me a pointer on how I would do this? Thanks in advance! Gareth
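
    A sketch of one way to wire the [x] into the existing bookkeeping - it reuses updateCookie() and the #slider name-matching from the code above; the .portlet-header class is an assumption based on the sortable-portlets demo markup:

        // Add a close control to each portlet header, tied into the same cookie.
        $('.portlet .portlet-header').append('<span class="close">[x]</span>');
        $('.portlet .close').click(function () {
            var el = $(this).parents('.portlet');
            el.hide();
            // mirror the state on the matching #slider control
            $("#slider div[name='" + el.attr('id') + "']").addClass('add');
            updateCookie(el);   // same persistence as the #slider toggles
        });

    Because updateCookie() keys off el.attr('id'), this only works if each portlet element carries the same id that the #slider controls reference by name.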


  • How to use c++0x thread in Android NDK?

    - by m-ric
    I am trying to compile this simple program with android-ndk-r8b:

    jni/hello_jni.cpp:

        #include <iostream>
        #include <thread>

        void hello() {
            std::cout << "Hi i'm a thread!!!" << std::endl;
        }

        int main() {
            std::thread th(hello);
            th.join();
            return 0;
        }

    jni/Application.mk:

        APP_OPTIM := release
        APP_MODULES := hello_thread
        APP_STL := gnustl_static

    jni/Android.mk:

        LOCAL_PATH := $(call my-dir)
        include $(CLEAR_VARS)
        LOCAL_CPPFLAGS += -std=c++0x -frtti
        LOCAL_MODULE := hello_thread
        LOCAL_LDLIBS := -L$(SYSROOT)/usr/lib -pthread
        LOCAL_SRC_FILES := hello_thread.cpp
        include $(BUILD_EXECUTABLE)

    ndk-build returns an error arguing that 'thread' is not a member of 'std'. I issued ndk-build -n to get the compilation command and ran it alone in my shell:

        /home/evigier/android-ndk-r8b/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/arm-linux-androideabi-g++ -MMD -MP -MF /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o.d -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -march=armv5te -mtune=xscale -msoft-float -fno-exceptions -fno-rtti -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi/include -I/home/evigier/eclipse_workspace/hello_thread/jni -DANDROID -Wa,--noexecstack -std=c++0x -frtti -O2 -DNDEBUG -g -I/home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include -c /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp -o /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o

        Compile++ thumb : hello_thread <= hello_thread.cpp
        In file included from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/stdio.h:55:0,
        from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/wchar.h:33,
        from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/cwchar:46,
        from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/bits/postypes.h:42,
        from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iosfwd:42,
        from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ios:39,
        from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ostream:40,
        from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iostream:40,
        from jni/hello_thread.cpp:4:
        /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/sys/types.h:124:9: error: 'uint64_t' does not name a type
        /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp: In function 'int main()':
        /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:5: error: 'thread' is not a member of 'std'
        /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:17: error: expected ';' before 'th'
        /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:15:5: error: 'th' was not declared in this scope

    I read a lot of threads/questions about POSIX threads and C++ threads, but still cannot find my answer. My arm-linux-androideabi/include/c++/4.6/thread file defines class thread in std only inside:

        #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)

    These macros don't seem to be defined in my SDK (c++config.h). But how can I possibly turn them on safely? Do I need to compile my own toolchain to use (non-p)threads?

    My host computer is: Linux evigier-ThinkPad-X220 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
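
    Until a toolchain with _GLIBCXX_HAS_GTHREADS enabled is available, the usual fallback on the NDK is raw pthreads, which Android's bionic libc does support out of the box (no -pthread flag needed). A minimal sketch of the same program on that API:

        #include <cstdio>
        #include <pthread.h>

        static void* hello(void*)
        {
            std::printf("Hi, I'm a (p)thread!\n");
            return NULL;
        }

        int main()
        {
            pthread_t th;
            pthread_create(&th, NULL, hello, NULL);
            pthread_join(th, NULL);
            return 0;
        }

    This trades std::thread's RAII conveniences for portability to the stock r8b toolchain; the program structure is otherwise identical.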


  • How does mysql define DISTINCT() in reference documentation

    - by goran
    EDIT: This question is about finding a definitive reference to MySQL syntax on SELECT-modifying keywords and functions. /EDIT

    AFAIK, SQL defines two uses of the DISTINCT keyword: SELECT DISTINCT field... and SELECT COUNT(DISTINCT field).... However, in one of the web applications that I administer, I've noticed performance issues on queries like:

        SELECT DISTINCT(field1), field2, field3 ...

    DISTINCT() on a single column makes no sense, and I am almost sure it is interpreted as:

        SELECT DISTINCT field1, field2, field3 ...

    but how can I prove this? I've searched the MySQL site for a reference on this particular syntax but could not find any. Does anyone have a link to a definition of DISTINCT() in MySQL, or know of another authoritative source on this? Best

    EDIT: After asking the same question on the MySQL forums, I learned that while parsing the SQL, MySQL does not care about whitespace between functions and column names (but I am still missing a reference). It seems you can have whitespace between a function and its parenthesis:

        SELECT LEFT (field1,1), field2...

    and get MySQL to understand it as SELECT LEFT(field,1). Similarly, SELECT DISTINCT(field1), field2... seems to get decomposed into SELECT DISTINCT (field1), field2..., and then DISTINCT is taken not as some undefined (or undocumented) function, but as the SELECT-modifying keyword, with the parentheses around field1 evaluated as part of the field expression. It would be great if someone had a pointer to documentation where it is stated that the whitespace between functions and parentheses is not significant, or could provide links to appropriate MySQL forums or mailing lists where I could raise the question of putting this into the reference.

    EDIT: I have found a reference to the server option IGNORE SPACE. It states that "The IGNORE SPACE SQL mode can be used to modify how the parser treats function names that are whitespace-sensitive"; later on, it states that recent versions of MySQL have reduced this number from 200 to 30. One of the remaining 30, for example, is COUNT. With IGNORE SPACE enabled, both

        SELECT COUNT(*) FROM mytable;
        SELECT COUNT (*) FROM mytable;

    are legal. So if this is an exception, I am left to conclude that functions ignore space by default. And if functions ignore space by default, then when the context is ambiguous - such as for the first function in the first item of the select expression - they are not distinguishable from keywords, no error can be thrown, and MySQL must accept them as keywords. Still, my conclusions feel like they rest on a lot of assumptions; I would still be grateful for, and will accept, any pointers on where to follow up on this.
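
    One empirical check of the hypothesis, short of a documentation reference: if DISTINCT(field1) were a function applied to field1 alone, the two statements below could disagree; if it is just the DISTINCT keyword plus a parenthesized expression, they must always return the same count (mytable/field1 stand in for real tables, as in the examples above):

        SELECT COUNT(*) FROM (SELECT DISTINCT(field1), field2 FROM mytable) AS a;
        SELECT COUNT(*) FROM (SELECT DISTINCT field1, field2 FROM mytable) AS b;

    Compare against the aggregate form, which genuinely applies to one column only:

        SELECT COUNT(DISTINCT field1) FROM mytable;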


  • Can't build full html table in QTextEdit with std::for_each...

    - by mosg
    Hi. Here is my function:

        void ReportHistory::update(void)
        {
            ui.output->clear();
            ui.output->setCurrentFont(QFont("Arial", 8, QFont::Normal));

            QString title = "My Title";
            QStringList headers = QString("Header1,Header2,Header3,Header4,Header5,Header6").split(",");

            QString html = QString(
                "<html>" \
                "<head>" \
                "<meta Content=\"Text/html; charset=Windows-1251\">" \
                "<title>%1</title>" \
                "</head>" \
                "<body bgcolor=#ffffff link=#5000A0>" \
                "<p>%1</p>" \
                "<table border=1 cellspacing=0 cellpadding=2>" \
                "<tr bgcolor=#f0f0f0>"
            ).arg(title);

            foreach (QString header, headers) {
                html.append(QString("<th>%1</th>").arg(header));
            }
            html.append("</tr>");

            struct Fill {
                QString html_;
                Analytics::NavHistory::History::value_type prev_;

                Fill(QString html) : html_(html) {}

                void operator ()(const Analytics::NavHistory::History::value_type& entry)
                {
                    QStringList line = (QString("%1|%2|%3|%4|%5|%6")
                        .arg(value1, 15)
                        .arg(value2 ? ' ' : 'C', 8)
                        .arg(value3, 15)
                        .arg(value4, 15, 'f', 4)
                        .arg(value5, 15)
                        .arg(value6, 15, 'f', 4)).split("|");

                    html_.append("<tr>");
                    foreach (QString item, line) {
                        html_.append("<td bkcolor=0>%1</td>").arg(item);
                    }
                    html_.append("</tr>");
                    prev_ = entry;
                }
            };

            std::for_each(history_->data().begin(), history_->data().end(), Fill(html));

            html.append(
                "</table>" \
                "</body>" \
                "</html>");

            ui.output->setHtml(html);
        }

    Where ui.output is a pointer to a QTextEdit. Question: ui.output just shows me the headers, and not the full table. What is wrong? Thanks.
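
    Two things in Fill look like they would produce exactly this symptom (a reading of the code, not a tested fix): QString::arg() returns a new string, so html_.append("<td ...>%1</td>").arg(item) appends the literal "%1" and throws the formatted result away; and std::for_each takes its functor by value, so all the row-appending happens on a copy of html while the original - the one passed to setHtml() - keeps only the headers. (bkcolor is also not a recognized attribute; Qt rich text understands bgcolor.) A sketch of both fixes:

        // (1) Format first, then append - arg() does not modify in place:
        html_->append(QString("<td bgcolor=#000000>%1</td>").arg(item));

        // (2) Let the functor write through a pointer, so for_each's copy
        //     still appends to the caller's string:
        struct Fill {
            QString* html_;
            explicit Fill(QString* html) : html_(html) {}
            void operator()(const Analytics::NavHistory::History::value_type& entry)
            {
                // ... build `line` as before, then html_->append(...) ...
            }
        };
        std::for_each(history_->data().begin(), history_->data().end(), Fill(&html));

    Alternatively, keep the by-value functor and use std::for_each's return value, which is the final copy of the functor, to read back the accumulated html_.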


  • jQuery show/hide menu problem

    - by jerrygarciuh
    Hi folks, I am encountering an odd behavior using jQuery to show/hide a menu. I have an absolutely positioned div which contains an "activator" div (relatively positioned) that should reveal a menu on mouseover. The menu div is contained by the activator div and is also relatively positioned. I was working on the assumption that, since the menu is contained by the activator, roll-off would not be triggered when the mouse travels over into the revealed menu. When you roll onto the revealed menu, however, show/hide starts pulsing, and keeps doing so for a second or so even after the mouse clears the area.

    The CSS looks like this:

        #myAbsolutePos {
            position:absolute;
            height:335px;
            width:213px;
            top:508px;
            left:0;
            z-index:2;
        }
        #activator {
            position:relative;
            height:35px;
            margin-top:95px;
            text-align:center;
            width:inherit;
            cursor:pointer;
        }
        #menu {
            position:relative;
            height:255px;
            width:243px;
            top:-45px;
            left:190px;
            padding:20px 25px 20px 25px;
        }
        #menuContents {
            width:190px;
        }

    jQuery functions:

        $('#activator').mouseover(function () {
            $('#menu').show('slow');
        });
        $('#activator').mouseout(function () {
            $('#menu').hide('slow');
        });

    HTML:

        <div id="myAbsolutePos">
            <div id="activator">
                // content
                <div id="menu" style="display:none">
                    <div id="menuContents">
                        // content
                    </div>
                </div>
            </div>
        </div>

    To see the problem in action, roll over the current weather location (Thunder Horse) in the lower left here: http://www.karlsenner.dreamhosters.com/index.php

    Any advice is most appreciated! JG
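
    The pulsing is consistent with the classic mouseover/mouseout flicker: those events bubble up from the child menu, and each one queues another animation. A sketch of the usual damping using standard jQuery - hover() for the enter/leave pair and stop(clearQueue, jumpToEnd) to collapse the queued animations:

        // Clear any queued show/hide animations before starting a new one,
        // so repeated enter/leave events can't pulse the menu.
        $('#activator').hover(
            function () { $('#menu').stop(true, true).show('slow'); },
            function () { $('#menu').stop(true, true).hide('slow'); }
        );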


  • Unity framework - creating & disposing Entity Framework datacontexts at the appropriate time

    - by TobyEvans
    Hi there,

    With some kindly help from StackOverflow, I've got Unity to create my chained dependencies, including an Entity Framework datacontext object:

        using (IUnityContainer container = new UnityContainer())
        {
            container.RegisterType<IMeterView, Meter>();
            container.RegisterType<IUnitOfWork, CommunergySQLiteEntities>(new ContainerControlledLifetimeManager());
            container.RegisterType<IRepositoryFactory, SQLiteRepositoryFactory>();
            container.RegisterType<IRepositoryFactory, WCFRepositoryFactory>("Uploader");
            container.Configure<InjectedMembers>()
                .ConfigureInjectionFor<CommunergySQLiteEntities>(
                    new InjectionConstructor(connectionString));

            MeterPresenter meterPresenter = container.Resolve<MeterPresenter>();

    This works really well in creating my Presenter object and displaying the related view; I'm really pleased. However, the problem I'm running into now is over the timing of the creation and disposal of the Entity Framework object (and I suspect this will go for any IDisposable object). Using Unity like this, the SQL EF object "CommunergySQLiteEntities" is created straight away, as I've added it to the constructor of the MeterPresenter:

        public MeterPresenter(IMeterView view, IUnitOfWork unitOfWork, IRepositoryFactory cacheRepository)
        {
            this.mView = view;
            this.unitOfWork = unitOfWork;
            this.cacheRepository = cacheRepository;
            this.Initialize();
        }

    I felt a bit uneasy about this at the time, as I don't want to be holding open a database connection, but I couldn't see any other way using Unity dependency injection. Sure enough, when I actually try to use the datacontext, I get this error:

        ((System.Data.Objects.ObjectContext)(unitOfWork)).Connection
        '((System.Data.Objects.ObjectContext)(unitOfWork)).Connection' threw an exception of type 'System.ObjectDisposedException'
        System.Data.Common.DbConnection {System.ObjectDisposedException}

    My understanding of the principle of IoC is that you set up all your dependencies at the top, resolve your object and away you go. However, in this case some of the child objects, e.g. the datacontext, don't need to be initialised at the time the parent Presenter object is created (as they would be when passed in via the constructor), but the Presenter does need to know what type to use for IUnitOfWork when it wants to talk to the database. Ideally, I want something like this inside my resolved Presenter:

        using (IUnitOfWork unitOfWork = new NewInstanceInjectedUnitOfWorkType())
        {
            //do unitOfWork stuff
        }

    so that the Presenter knows what IUnitOfWork implementation to create and dispose of straight away, preferably from the original RegisterType call. Do I have to put another Unity container inside my Presenter, at the risk of creating a new dependency? This is probably really obvious to an IoC guru, but I'd really appreciate a pointer in the right direction.

    thanks
    Toby
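
    One common way out, sketched below with invented names (IUnitOfWorkFactory/UnitOfWorkFactory are illustrations, not Unity API): inject a factory instead of the unit of work itself, so the container wiring stays at the top but each context is created late and disposed early:

        public interface IUnitOfWorkFactory
        {
            IUnitOfWork Create();            // a fresh, short-lived context per call
        }

        public class UnitOfWorkFactory : IUnitOfWorkFactory
        {
            private readonly string connectionString;

            public UnitOfWorkFactory(string connectionString)
            {
                this.connectionString = connectionString;
            }

            public IUnitOfWork Create()
            {
                // New context each time; the caller disposes it.
                return new CommunergySQLiteEntities(connectionString);
            }
        }

    Registered like the other mappings (container.RegisterType<IUnitOfWorkFactory, UnitOfWorkFactory>(new InjectionConstructor(connectionString))), the presenter then takes an IUnitOfWorkFactory in its constructor and wraps each database operation in using (IUnitOfWork unitOfWork = uowFactory.Create()) { ... }, so the connection lives only as long as each operation.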

  • C++ How to deep copy a struct with unknown datatype?

    - by Ewald Peters
    Hi, I have a "data provider" which stores its output in a struct of a certain type, for instance:

        struct DATA_TYPE1 {
            std::string data_string;
        };

    This struct then has to be cast to a general datatype, I thought about void * or char *, because the "intermediate" object that copies the data and stores it in its binary tree should be able to store many different types of such structs:

        struct BINARY_TREE_ENTRY {
            void * DATA;
            struct BINARY_TREE_ENTRY * next;
        };

    The void * is later taken by another object that casts it back to (struct DATA_TYPE1 *) to get the original data. So the sender and the receiver know about the datatype DATA_TYPE1, but the copying object in between does not. But how can the intermediate object deep copy the contents of the different structs when it doesn't know the datatype, only void *, and has no method to copy the real contents? dynamic_cast doesn't work for void *. The "intermediate" object should do something like:

        void store_data(void * CASTED_DATA_STRUCT) {
            void * DATA_COPY = create_a_deepcopy_of(CASTED_DATA_STRUCT);
            push_into_bintree(DATA_COPY);
        }

    A simple solution would be for the sending object not to delete the sent data struct until the receiving object has got it, but the sending objects are dynamically created and deleted before the receiver gets the data from the intermediate object (the communication is asynchronous), therefore I want to copy it.

    Instead of converting to void *, I also tried converting to a pointer to a superclass that the intermediate copying object knows about, and that is inherited by all the different struct datatypes:

        struct DATA_BASE_OBJECT {
        public:
            DATA_BASE_OBJECT() {}
            DATA_BASE_OBJECT(DATA_BASE_OBJECT * old_ptr) {
                std::cout << "this should be automatically overridden!" << std::endl;
            }
            virtual ~DATA_BASE_OBJECT() {}
        };

        struct DATA_TYPE1 : public DATA_BASE_OBJECT {
        public:
            string str;
            DATA_TYPE1() {}
            ~DATA_TYPE1() {}
            DATA_TYPE1(DATA_TYPE1 * old_ptr) {
                str = old_ptr->str;
            }
        };

    The corresponding binary tree entry would then be:

        struct BINARY_TREE_ENTRY {
            struct DATA_BASE_OBJECT * DATA;
            struct BINARY_TREE_ENTRY * next;
        };

    To copy the unknown datatype, I tried this in the class that now receives a struct DATA_BASE_OBJECT * (where it was void * before):

        void copy_data(DATA_BASE_OBJECT * data_that_i_get_in_the_sub_struct) {
            struct DATA_BASE_OBJECT * copy_sub = new DATA_BASE_OBJECT(data_that_i_get_in_the_sub_struct);
            push_into_bintree(copy_sub);
        }

    I then added a copy constructor to DATA_BASE_OBJECT, but if the struct DATA_TYPE1 is first cast to a DATA_BASE_OBJECT * and then copied, the derived DATA_TYPE1 part is not copied along (the copy is sliced). I then thought about finding out the size of the actual object and just memcpy-ing it, but the bytes are not stored in one contiguous block, and how would I find out the real in-memory size of a struct DATA_TYPE1 that holds a std::string? Which other C++ methods are available to deep copy an unknown datatype (and maybe to get the datatype information some other way at runtime)?

    Thanks, Ewald
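    The usual way to deep copy through a base pointer without knowing the concrete type is the "virtual constructor" (clone) idiom: each concrete struct overrides a virtual clone() that copies itself, and the intermediate object just calls it. A minimal sketch along those lines (push_into_bintree left as a stub):

        #include <string>

        struct DATA_BASE_OBJECT {
            virtual ~DATA_BASE_OBJECT() {}
            // Every concrete type knows how to deep-copy itself.
            virtual DATA_BASE_OBJECT * clone() const = 0;
        };

        struct DATA_TYPE1 : public DATA_BASE_OBJECT {
            std::string str;
            // The compiler-generated copy constructor deep-copies str.
            virtual DATA_TYPE1 * clone() const { return new DATA_TYPE1(*this); }
        };

        // The intermediate object never needs the concrete type:
        void store_data(DATA_BASE_OBJECT * CASTED_DATA_STRUCT) {
            DATA_BASE_OBJECT * DATA_COPY = CASTED_DATA_STRUCT->clone();
            // push_into_bintree(DATA_COPY);
        }

    The receiver can then cast the base pointer back to DATA_TYPE1 * as before (dynamic_cast works here, since the base is polymorphic), and the intermediate object can safely delete through the base pointer because the destructor is virtual.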

  • jquery ui tabs close button beneath the text

    - by Pradyut Bhattacharya
    Hi, I'm using jQuery UI tabs with the ability to dynamically close them: on the example page here, clicking the 'add tab' link adds tabs to the tab panel. In Firefox, the close buttons are displayed beneath the text of the tabs, which garbles the text in the tab panel and the tab body, unlike other browsers. How can I display the close button on the same line? The CSS I'm using is:

        .ui-tabs { padding: .20em; zoom: 1; }
        .ui-tabs .ui-tabs-nav { list-style: none; position: relative; padding: .2em .2em 0; height: 27px; }
        .ui-tabs .ui-tabs-nav li { position: relative; float: left; border-bottom-width: 0 !important; margin: 0 .2em -1px 0; padding: 0; font-size: 63.5%; }
        .ui-tabs .ui-tabs-nav li a { float: left; text-decoration: none; padding: .5em 1em; }
        .ui-tabs .ui-tabs-nav li.ui-tabs-selected { padding-bottom: 1px; border-bottom-width: 0; }
        .ui-tabs .ui-tabs-nav li.ui-tabs-selected a, .ui-tabs .ui-tabs-nav li.ui-state-disabled a, .ui-tabs .ui-tabs-nav li.ui-state-processing a { cursor: text; }
        .ui-tabs .ui-tabs-nav li a, .ui-tabs.ui-tabs-collapsible .ui-tabs-nav li.ui-tabs-selected a { cursor: pointer; font: 62.5%; }
        .ui-tabs .ui-tabs-panel { padding: 1em 1.4em; display: block; border-width: 0; background: black; color: white; font-size: 12px; }
        .ui-tabs .ui-tabs-hide { display: none !important; font: 62.5%; }
        #tabs .ui-tabs-nav li a:hover { float: left; text-decoration: none; padding: .5em 1em; background-color: #868472; }
        #tabs-profile .ui-tabs-nav li { position: relative; float: left; border-bottom-width: 0 !important; margin: 0 .2em -1px 0; padding: 0; font-size: 75%; }

    Please help. Thanks, Pradyut (India)
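    If the close button is the span.ui-icon-close element that the jQuery UI tab-manipulation demo appends inside each tab's li (an assumption, since the markup isn't shown above), floating it alongside the tab's anchor usually keeps it on the same line in Firefox:

        /* Float the close icon so it sits beside the tab text instead of
           wrapping onto its own line; the margin roughly centers it. */
        .ui-tabs .ui-tabs-nav li .ui-icon-close {
            float: left;
            margin: 0.4em 0.2em 0 0;
            cursor: pointer;
        }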

  • TVirtualStringTree - resetting non-visual nodes and memory consumption

    - by Remy Lebeau - TeamB
    I have an app that loads records from a binary log file and displays them in a virtual TListView. There are potentially millions of records in a file, and the display can be filtered by the user, so I do not load all of the records in memory at one time, and the ListView item indexes are not in a 1-to-1 relation with the file record offsets (list item 1 may be file record 100, for instance). I use the ListView's OnDataHint event to load records for just the items the ListView is actually interested in. As the user scrolls around, the range specified by OnDataHint changes, allowing me to free records that are not in the new range and allocate new records as needed. This works fine, speed is tolerable, and the memory footprint is very low.

    I am currently evaluating TVirtualStringTree as a replacement for the TListView, mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straightforward as using a real tree). For the most part, I have been able to port the TListView logic and have everything work as I need.

    I notice that TVirtualStringTree's virtual paradigm is vastly different, though. It does not have the same kind of OnDataHint functionality that TListView does (I can use the OnScroll event to fake it, which allows my memory buffer logic to continue working), and I can use the OnInitNode event to associate nodes with records that are allocated. However, once a tree node is initialized, it seems that it remains initialized for the lifetime of the tree. That is not good for me. As the user scrolls around and I remove records from memory, I need to reset those non-visible nodes without removing them from the tree completely, or losing their expand/collapse states. When the user scrolls them back into view, I can re-allocate the records and re-initialize the nodes.

    Basically, I want to make TVirtualStringTree act as much like TListView as possible, as far as its virtualization is concerned. I have seen that TVirtualStringTree has a ResetNode() method, but I encounter various errors whenever I try to use it; I must be using it wrong. I also thought of just storing a data pointer inside each node to my record buffers, and as I allocate and free memory, updating those pointers accordingly. The end effect does not work so well, either.

    Worse, my largest test log file has ~5 million records in it. If I initialize the TVirtualStringTree with that many nodes at one time (when the log display is unfiltered), the tree's internal overhead for its nodes takes up a whopping 260MB of memory (without any records being allocated yet), whereas with the TListView, loading the same log file and all the memory logic behind it, I can get away with using just a few MBs. Any ideas?
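    For what it's worth, one common pattern with VirtualTreeView is to keep each node's payload down to a stable key (the file record offset) and resolve the heavy record data through a separate cache at display time, so freeing a record never invalidates the node itself. A rough Delphi sketch under that assumption (OffsetForIndex and RecordCache are hypothetical; NodeDataSize must be set to SizeOf(TNodeData), and older VirtualTreeView versions declare CellText as WideString):

        type
          PNodeData = ^TNodeData;
          TNodeData = record
            RecordOffset: Int64;  // stable key into the log file
          end;

        procedure TMainForm.TreeInitNode(Sender: TBaseVirtualTree;
          ParentNode, Node: PVirtualNode; var InitialStates: TVirtualNodeInitStates);
        var
          Data: PNodeData;
        begin
          // Only the cheap key is stored per node, never the record itself.
          Data := Sender.GetNodeData(Node);
          Data^.RecordOffset := OffsetForIndex(Node^.Index);  // hypothetical mapping
        end;

        procedure TMainForm.TreeGetText(Sender: TBaseVirtualTree; Node: PVirtualNode;
          Column: TColumnIndex; TextType: TVSTTextType; var CellText: string);
        var
          Data: PNodeData;
        begin
          Data := Sender.GetNodeData(Node);
          // The cache reloads the record by offset if it has been freed, so
          // scrolled-out nodes never need to be reset or re-initialized.
          CellText := RecordCache.TextFor(Data^.RecordOffset, Column);
        end;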

  • Problem with bootstrap loader and kernel

    - by dboarman-FissureStudios
    We are working on a project to learn how to write a kernel and learn the ins and outs. We have a bootstrap loader written and it appears to work; however, we are having a problem with the kernel loading. I'll start with the first part, bootloader.asm:

        [BITS 16]
        [ORG 0x0000]
        ;
        ; all the stuff in between
        ;
        ; the bottom of the bootstrap loader
        datasector  dw 0x0000
        cluster     dw 0x0000
        ImageName   db "KERNEL SYS"
        msgLoading  db 0x0D, 0x0A, "Loading Kernel Shell", 0x0D, 0x0A, 0x00
        msgCRLF     db 0x0D, 0x0A, 0x00
        msgProgress db ".", 0x00
        msgFailure  db 0x0D, 0x0A, "ERROR : Press key to reboot", 0x00

        TIMES 510-($-$$) DB 0
        DW 0xAA55

    The full bootloader.asm is too long to post without making the editor chug and choke. In addition, the bootloader and kernel do work within bochs, as we get the message "Welcome to our OS". Anyway, the following is what we have for a kernel at this point, kernel.asm:

        [BITS 16]
        [ORG 0x0000]

        [SEGMENT .text]             ; code segment
            mov ax, 0x0100          ; location where kernel is loaded
            mov ds, ax
            mov es, ax

            cli
            mov ss, ax              ; stack segment
            mov sp, 0xFFFF          ; stack pointer at 64k limit
            sti

            mov si, strWelcomeMsg   ; load message
            call _disp_str

            mov ah, 0x00
            int 0x16                ; interrupt: await keypress
            int 0x19                ; interrupt: reboot

        _disp_str:
            lodsb                   ; load next character
            or al, al               ; test for NUL character
            jz .DONE
            mov ah, 0x0E            ; BIOS teletype
            mov bh, 0x00            ; display page 0
            mov bl, 0x07            ; text attribute
            int 0x10                ; interrupt: invoke BIOS
            jmp _disp_str
        .DONE:
            ret

        [SEGMENT .data]             ; initialized data segment
        strWelcomeMsg db "Welcome to our OS", 0x00

        [SEGMENT .bss]              ; uninitialized data segment

    Using nasm 2.06rc2, we compile as such:

        nasm bootloader.asm -o bootloader.bin -f bin
        nasm kernel.asm -o kernel.sys -f bin

    We write bootloader.bin to the floppy as such:

        dd if=bootloader.bin bs=512 count=1 of=/dev/fd0

    and kernel.sys as such:

        cp kernel.sys /dev/fd0

    As stated, this works in bochs, but booting from the actual floppy we get output like this:

        Loading Kernel Shell
        ...........
        ERROR : Press key to reboot

    Other specifics: openSUSE 11.2, GNOME desktop, AMD x64. If there is any other information I may have missed, feel free to ask; I tried to include everything that would be needed. If I need to, I can find a way to get the entire bootloader.asm posted somewhere. We are not really interested in using GRUB either, for several reasons. This could change, but we want to see this boot successfully before we really consider GRUB.
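    One difference between bochs and real hardware worth ruling out: a physical floppy routinely fails the first INT 13h reads while the motor spins up, whereas emulators succeed on the first try. A hedged sketch of a read wrapper with reset-and-retry (DisplayFailure is a hypothetical error routine; the caller sets ES:BX and the CHS/drive registers as usual):

        ReadSector:
            mov     di, 5               ; retry up to 5 times
        .attempt:
            mov     ah, 0x02            ; BIOS: read sectors
            mov     al, 1               ; one sector
            int     0x13
            jnc     .done               ; CF clear = success
            xor     ah, ah              ; BIOS: reset disk system
            int     0x13
            dec     di
            jnz     .attempt
            jmp     DisplayFailure      ; hypothetical failure handler
        .done:
            ret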

  • directshow Renderstream fails with grayscale bitmaps

    - by Roey
    Hi all. I'm trying to create a DirectShow graph to play back a video composed of 8-bit grayscale bitmaps (using DirectShow.NET). I'm using a source filter and the VMR9 renderer. The source filter's output pin is defined using the following code:

        bmi.Size = Marshal.SizeOf(typeof(BitmapInfoHeader));
        bmi.Width = width;
        bmi.Height = height;
        bmi.Planes = 1;
        bmi.BitCount = (short)bitcount;
        bmi.Compression = 0;
        bmi.ImageSize = Math.Abs(bmi.Height) * bmi.Width * bmi.BitCount / 8;
        bmi.ClrUsed = bmi.BitCount <= 8 ? 256 : 0;
        bmi.ClrImportant = 0;
        //bmi.XPelsPerMeter = 0;
        //bmi.YPelsPerMeter = 0;

        bool isGrayScale = bmi.BitCount <= 8 ? true : false;

        int formatSize = Marshal.SizeOf(typeof(BitmapInfoHeader));
        if (isGrayScale == true)
        {
            MessageWriter.Log.WriteTrace("Playback is grayscale.");
            /// Color table holds an array of 256 RGBQUAD values.
            /// Those are relevant only for grayscale bitmaps.
            formatSize += Marshal.SizeOf(typeof(RGBQUAD)) * bmi.ClrUsed;
        }

        IntPtr ptr = Marshal.AllocHGlobal(formatSize);
        Marshal.StructureToPtr(bmi, ptr, false);

        if (isGrayScale == true)
        {
            /// Adjust the pointer to the beginning of the
            /// color table address and create the grayscale color table.
            IntPtr ptrNext = (IntPtr)((int)ptr + Marshal.SizeOf(typeof(BitmapInfoHeader)));
            for (int i = 0; i < bmi.ClrUsed; i++)
            {
                RGBQUAD rgbCell = new RGBQUAD();
                rgbCell.rgbBlue = rgbCell.rgbGreen = rgbCell.rgbRed = (byte)i;
                rgbCell.rgbReserved = 0;

                Marshal.StructureToPtr(rgbCell, ptrNext, false);
                ptrNext = (IntPtr)((int)ptrNext + Marshal.SizeOf(typeof(RGBQUAD)));
            }
        }

    This causes RenderStream to fail with "No combination of intermediate filters could be found to make the connection." Please help! Thanks.
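    A couple of things worth checking on the media type itself (hedged, since the AMMediaType setup isn't shown above): for 8-bit palettised video the subtype needs to be MEDIASUBTYPE_RGB8, the format block should be a VIDEOINFOHEADER whose embedded BITMAPINFOHEADER is immediately followed by the 256 RGBQUAD entries, and formatSize must include those palette bytes. A sketch using DirectShowLib names:

        AMMediaType mt = new AMMediaType();
        mt.majorType = MediaType.Video;
        mt.subType = MediaSubType.RGB8;        // palettised 8-bit, not RGB24
        mt.formatType = FormatType.VideoInfo;  // VIDEOINFOHEADER, not a bare BITMAPINFOHEADER
        mt.fixedSizeSamples = true;
        mt.sampleSize = bmi.ImageSize;

        // Header plus the 256-entry grayscale palette built above.
        int paletteBytes = 256 * Marshal.SizeOf(typeof(RGBQUAD));
        mt.formatSize = Marshal.SizeOf(typeof(VideoInfoHeader)) + paletteBytes;
        mt.formatPtr = Marshal.AllocCoTaskMem(mt.formatSize);
        // ... fill a VideoInfoHeader, copy bmi into its BmiHeader field,
        // then write the palette immediately after the header ...

    Even with a correct media type, the VMR9 does not always accept palettised input directly; if RenderStream still fails, expanding frames to RGB24 in the source filter, or manually inserting a Color Space Converter filter into the graph, is a common fallback.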
