Search Results

Search found 578 results on 24 pages for 'august'.


  • WebLogic Partner Community Newsletter May 2014

    - by JuergenKress
    Dear WebLogic Partner Community member, Registration for the Fusion Middleware Summer Camps 2014 is open – register asap for one of our bootcamps, August 4th–8th 2014 in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, as in the past, the conference will be booked out soon! Thanks to you, our WebLogic Specialized Partners, Oracle is #1 in worldwide market share (total software revenue) in the Application Platform market segment for 2013. Want to know why? Get the new recipes for Oracle WebLogic 12.1.2. Looking for the right server to run WebLogic? Try WebLogic on Oracle Database Appliance 2.9. Want to install WebLogic? Play around with the WebLogic Maven Plug-In. Thanks for sharing all the additional WebLogic articles within the community: How to use NodeManager to control WebLogic Servers & Retrieving WebLogic Server Name and Port in ADF Application & Glassfish to WebLogic Migration & Advanced GPIO & Building Robots with Java Embedded & Quick & Dirty How-to Guide: Install GlassFish 4 on Raspberry Pi & New Release: Java Micro Edition (ME) 8. In our development tool section, Frank published Development - Performance and Tuning - Overview on the latest ADF Architecture TV channel. Many of our clients run Forms applications – make sure you run them on WebLogic. Thanks for sharing all the additional development tool articles within the community: Using Oracle WebLogic 12c with NetBeans IDE & Consuming SOAP Service & Check Box Support in ADF Query & New release of the ADF EMG Audit Rules & Working with the Array Data Type in a Table & ADF client-side architecture - Select All & Book Review: NetBeans Platform for Beginners. See you in Lisbon! To read the complete newsletter please visit http://tinyurl.com/WebLogicNewsMay2014 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: WebLogic Community newsletter, newsletter, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Business Intelligence goes Big Data

    - by Alliances & Channels Redaktion
    Big Data is the next big challenge for the IT industry: masses of data from ever more sources – social networks, telecommunication and web logs, RFID readers, etc. – have to be logically linked, integrated and processed in real time. But what about putting this into practice? A Europe-wide study by Steria Mummert Consulting shows that only 28% of companies have so far implemented an overarching, coordinated business intelligence strategy. What prevails are isolated BI solutions, which are already operating at the limits of their capacity. In other words, data is currently used only to a limited extent as a value-creating resource! The result of the study sounds alarming, but companies can turn it to their advantage: whoever tackles Big Data now can secure a profitable head start over the competition. What will the analytics environment of the future look like? How and where can Big Data be used for business success? Answers come from the customer event series run by Oracle and Oracle Platinum Partner Steria Mummert Consulting, where strategies are developed for how companies can use Information Discovery to expand their BI potential towards Big Data step by step. Highlights from Munich: the feedback from Munich, the first stop of the event series on 23 July, was thoroughly positive. It was not just the great venue, "La Villa" in the Bamberger Haus, that impressed; the 31 participants also took away a lot of substance – among other things, a concrete proposal for their own roadmap towards Big Data. The opening question of the day was as simple as it was comprehensive: how can we keep an overview in a complex world? Steria Mummert Consulting presented the status quo of business intelligence in Europe based on the European biMA® Study 2012/13. Using examples from their own practice, the invited experts from Oracle and Steria Mummert Consulting presented various solution approaches. A very vivid Endeca demo showed, for example, how simple and flexible a dashboard can be: there are no predefined reports; instead, decision makers can change the filters via drag & drop and thus get an individually structured overview of their data. The session on Oracle Business Analytics for mobile applications and Real-Time Decisions offered an outlook on what is coming. Conclusion: a successful mix of overview information and very concrete ideas for the customers' specific application areas. The "BI goes Big Data" event series stops in Hamburg and Frankfurt in August. The free event is held together with Steria Mummert Consulting and is aimed at end customers: in Hamburg on 14 August 2013 (registration) and in Frankfurt am Main on 20 August 2013 (registration).

    Read the article

  • Business Analytics Oracle Partner Advisory Council 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    WHEN: Friday the 20th of September 2013 (just before OpenWorld). Register by: 1st of August 2013. WHERE: Sofitel San Francisco Bay, 223 Twin Dolphin Drive, 94065 Redwood City, CA, USA. Don't miss this once-a-year opportunity to meet and drive a closer engagement with Oracle's global Product Management team, which will host this workshop. Target group: ACE Directors, lead consultants and key architects from our partners. Topics: product development roadmap, partner project experience, your feedback, Q&A session. For any inquiries please contact [email protected]. As part of registering for the event, please note that you will need to complete the BI/EPM PAC survey. Thank you for taking the time to provide your valuable input. The PAC (one for BI, and a separate track for Hyperion) is free of charge, but is only for OPN Specialised member partners, and is subject to availability. (Please do not attend unless you have received a confirmation from Oracle to do so.) Registration link coming soon. We do hope you will be able to join us – and I look forward to welcoming you in San Francisco.

    Read the article

  • JSR 355 Final Release moves JCP to version 2.9

    - by heathervc
    JSR 355, JCP EC Merge, passed the JCP EC Final Approval Ballot on 13 August 2012, with 14 Yes votes, 1 abstain (1 member did not vote) on the SE/EE EC, and 12 yes votes (2 members were not eligible to vote) on the ME EC.  JSR 355 posted a Final Release this week, moving the JCP program version to JCP 2.9.  The transition to a merged EC will happen after the 2012 EC Elections, as defined in the Appendix B of the JCP (pasted below), and the EC will operate under the new EC Standing Rules. In the previous version (2.8) of this Process Document there were two separate Executive Committees, one for Java ME and one for Java SE and Java EE combined. The single Executive Committee described in this version of the Process Document will be implemented through the following process: The 2012 annual elections will be held as defined in JCP 2.8, but candidates will be informed that if they are elected their term will be for only a single year, since all candidates must stand for re-election in 2013. Immediately after the 2012 election the two ECs will be merged. Oracle and IBM's second seats will be eliminated, resulting in a single EC with 30 members. All subsequent JSR ballots (even for in-progress JSRs) will then be voted on by the merged EC. For the 2013 annual elections three Ratified and two Elected Seats will be eliminated, thereby reducing the EC to 25 members. All 25 seats will be up for re-election in 2013. Members elected in 2013 will be ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one year term. All members elected in 2014 and subsequently will serve a two-year term. For clarity, note that the provisions specified in this version of the Process Document regarding a merged EC will apply to subsequent ballots on all existing JSRs, whether or not the Spec Leads of those JSRs chose to adopt this version of the Process Document in its entirety. <end of Appendix> Also of note:  the materials and minutes from the July EC meeting and the June EC Meeting are now available--following the July EC Meeting, Samsung and SK Telecom lost their EC seats. The June EC meeting also had a public portion--the audio from the public portion of the EC meeting are now posted online.  For Spec Leads there is also the recording of the EG Nominations call.

    Read the article

  • How should I land my next consulting gig? [closed]

    - by MrOodles
    For the last couple of years, I've been working on speculative projects, expanding my skillset with side work, and paying the bills while having a blast consulting for startups. However, for a number of personal reasons, I need to spend the next 6-9 months maximizing my cash income. I want to put this into effect starting in early August. So that means I have one month to put the necessary client list/portfolio/resume together to start making this happen. As a programmer, I am very proficient in building Django web apps. I can write the necessary SQL, Python, JavaScript, and CSS to build every part of a Django app, and then do the system administration necessary to deploy on AWS using EC2. I can also rig up a CDN to work seamlessly with the app using S3 and CloudFront. I have built GIS applications using GeoDjango and PostGIS, and I have constructed social video apps by implementing Encoding.com as a service to prepare raw video files for consumption on the web. I am also moderately proficient in programming PHP, Java, and C#. I have built web apps in PHP, and desktop apps in Java and C#. I have dabbled with Android applications and iPhone apps, but nothing I would show off. I have experience doing SEO, social media marketing, and content marketing. Many of my clients have needed their apps promoted after they were built, and I was always happy to oblige when I could. I have also worked with biometrics technology, including fingerprints, for government contractors. This was as much a business analyst role as it was a programming gig, as I had to help answer RFPs, make checklists, and work around reams and reams of regulations to build applications that met very large bureaucratic requirements. I only have two real requirements for my next gig(s): 1) Work remotely. I live in North East Ohio, and I don't plan on leaving, but I wouldn't mind traveling one or two weeks out of every month to service clients who need on-site help. 2) A $60.00/hr–$∞ USD contracting rate. So what should I do for the next 30 days to achieve this? Should I target some large company and learn the requisite buzzwords to impress them? Should I learn some new language or technology? Polish some skill that I already have? Should I build something using my current skillset, or with some new technology? Should I put a website for my consultancy together to market myself? Should I do that using latest technology x, y, and z? Or should I just slap something up on Tumblr? I'm willing to do anything (moral) over the next 4 weeks to put myself into a position to maximize my income, and I'm open to any and every idea Programmers users may have. Let me hear them.

    Read the article

  • Sun Solaris - Find out number of processors and cores

    - by Adrian
    Our SPARC server is running Sun Solaris 10; I would like to find out the actual number of processors and the number of cores for each processor. The output of psrinfo and prtdiag is ambiguous: $psrinfo -v Status of virtual processor 0 as of: dd/mm/yyyy hh:mm:ss on-line since dd/mm/yyyy hh:mm:ss. The sparcv9 processor operates at 1592 MHz, and has a sparcv9 floating point processor. Status of virtual processor 1 as of: dd/mm/yyyy hh:mm:ss on-line since dd/mm/yyyy hh:mm:ss. The sparcv9 processor operates at 1592 MHz, and has a sparcv9 floating point processor. Status of virtual processor 2 as of: dd/mm/yyyy hh:mm:ss on-line since dd/mm/yyyy hh:mm:ss. The sparcv9 processor operates at 1592 MHz, and has a sparcv9 floating point processor. Status of virtual processor 3 as of: dd/mm/yyyy hh:mm:ss on-line since dd/mm/yyyy hh:mm:ss. The sparcv9 processor operates at 1592 MHz, and has a sparcv9 floating point processor. _ $prtdiag -v System Configuration: Sun Microsystems sun4u Sun Fire V445 System clock frequency: 199 MHZ Memory size: 32GB ==================================== CPUs ==================================== E$ CPU CPU CPU Freq Size Implementation Mask Status Location --- -------- ---------- --------------------- ----- ------ -------- 0 1592 MHz 1MB SUNW,UltraSPARC-IIIi 3.4 on-line MB/C0/P0 1 1592 MHz 1MB SUNW,UltraSPARC-IIIi 3.4 on-line MB/C1/P0 2 1592 MHz 1MB SUNW,UltraSPARC-IIIi 3.4 on-line MB/C2/P0 3 1592 MHz 1MB SUNW,UltraSPARC-IIIi 3.4 on-line MB/C3/P0 _ $more /etc/release Solaris 10 8/07 s10s_u4wos_12b SPARC Copyright 2007 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms. Assembled 16 August 2007 Patch Cluster - EIS 29/01/08(v3.1.5) What other methods can I use? EDITED: It looks like we have a 4 processor system with one core each: $psrinfo -p 4 _ $psrinfo -pv The physical processor has 1 virtual processor (0) UltraSPARC-IIIi (portid 0 impl 0x16 ver 0x34 clock 1592 MHz) The physical processor has 1 virtual processor (1) UltraSPARC-IIIi (portid 1 impl 0x16 ver 0x34 clock 1592 MHz) The physical processor has 1 virtual processor (2) UltraSPARC-IIIi (portid 2 impl 0x16 ver 0x34 clock 1592 MHz) The physical processor has 1 virtual processor (3) UltraSPARC-IIIi (portid 3 impl 0x16 ver 0x34 clock 1592 MHz)
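
    For scripting the same check, the counts can be pulled straight from psrinfo. The snippet below is a minimal Python sketch (not from the original post) that shells out to psrinfo, which on this Solaris 10 box already distinguishes physical processors (psrinfo -p) from virtual processors (plain psrinfo); treat the parsing as an assumption about the output format shown above.

        import subprocess

        def run(cmd):
            """Run a command and return its stdout as text."""
            return subprocess.check_output(cmd, shell=True, universal_newlines=True)

        # Number of physical processors (chips), as reported by `psrinfo -p`.
        physical = int(run("psrinfo -p").strip())

        # Number of virtual processors (cores/threads exposed to the OS):
        # plain `psrinfo` prints one status line per virtual processor.
        virtual = len([line for line in run("psrinfo").splitlines() if line.strip()])

        print("physical processors:", physical)
        print("virtual processors :", virtual)
        print("virtual per chip   :", virtual // physical if physical else "n/a")

    On this box the sketch would report 4 physical processors with 1 virtual processor each, matching the psrinfo -p and psrinfo -pv output quoted above.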

    Read the article

  • QT Creator 64-bit Snow Leopard

    - by quadelirus
    I have a bunch of libraries that I need to link against that I installed via macports. They are 64-bit libraries. I'm working on an application written with QT Creator and the .pro is set up. I downloaded the QT SDK for Mac OS X, but it is 32-bit and so the compiled code won't link against the 64-bit binaries that I got from macports. Ok. So I downloaded the QT SDK source and built from source using -arch x86_64. Now I have a 64-bit version of the SDK (I think) but it didn't build a QT Creator app. So. I need to know one of 4 things: Either, 1.) I'm guessing that a simple make command will convince the QT SDK to build the creator for me. If this is true, then what is the command (make creator?). barring that, I need to know 2.) The easiest way to get MacPorts to redownload the libraries that I installed with a 32-bit version (I keep seeing a "+universal" mentioned, but I haven't seen it on a line, and simply calling ports +universal install XYZ doesn't seem to work--perhaps I need to uninstall and reinstall the package?). Also, is this a stupid idea? or 3.) Someone who actually has a prebuilt 64-bit QT SDK installer so I don't have to mess with this. It is ridiculous that QT doesn't already have this available, in my opinion--SL has been out since, what, last August? 4--and this would take the cake.) I don't understand why I can't simply put a "compile-for-64-bit stupid" command directly into the QT pro file and have it build. There isn't really a reason why a compiler compiled in 32-bits couldn't compile to 64-bits is there? Thanks.

    Read the article

  • Cooling Server Closet - No A/C Is Possible

    - by JamesCo
    We're moving into a new office in an old building in London (that's England :) and are walling off a 2m x 1.3m area, where the router & telephone equipment currently terminates, to use as a server closet. The closet will contain: 2 x 24-port switches, 1 router, 1 VDSL modem, 1 Dell desktop, 1 4-bay NAS, 1 HP micro-server, 1 UPS, and miscellaneous minor telephony boxes. There is no central A/C in the office and there never will be. We can install ducting to the outside quite easily - it's only a couple of metres to the windows, which face a courtyard. My question is whether installing an extractor fan with ducting to the window should be sufficient for cooling. Would an intake fan and intake duct (from the window, too) be required? We don't want to leave a gap in the closet door as that'll let noise out into the office. If we don't have to put a portable A/C unit into the closet, that'd be perfect. The office has about 12 people; London is temperate, the average maximum in August is 31 Celsius, and 25 Celsius is more typical. The same equipment runs fine in our current office (same building as the new office, also no A/C) but it isn't in an enclosed space. I can see us putting say one Dell 2950 tower server into the closet, but no more than that. So, sustained power consumption in the closet would currently be about 800 W (I'm guessing); possibly in the future 2 kW. The closet will have a ceiling and no windows and be well-insulated. We don't care if the equipment runs hot, so long as it runs and we don't hear it.
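
    A rough sanity check for the extractor-fan question: the airflow needed to carry a given heat load out of the closet follows from Q = rho * airflow * cp * deltaT. The sketch below (plain Python, with assumed figures – 800 W now, 2 kW later, and a tolerated 10 °C rise above office temperature) is only an estimate and ignores duct losses and hot spots.

        RHO_AIR = 1.2       # kg/m^3, approximate density of air at room temperature
        CP_AIR = 1005.0     # J/(kg*K), specific heat of air

        def required_airflow(heat_watts, delta_t_celsius):
            """Return the airflow (m^3/h and CFM) needed to remove heat_watts
            with an air temperature rise of delta_t_celsius through the closet."""
            m3_per_s = heat_watts / (RHO_AIR * CP_AIR * delta_t_celsius)
            m3_per_h = m3_per_s * 3600.0
            cfm = m3_per_h / 1.699  # 1 CFM is roughly 1.699 m^3/h
            return m3_per_h, cfm

        for load in (800.0, 2000.0):                   # current and possible future load, watts
            m3h, cfm = required_airflow(load, 10.0)    # assume a 10 C rise is acceptable
            print("%.0f W load: ~%.0f m^3/h (~%.0f CFM)" % (load, m3h, cfm))

    That works out to roughly 240 m^3/h (about 140 CFM) for the current load and about 600 m^3/h (350 CFM) for 2 kW, which is within reach of a single in-line extractor fan – but only if an intake path of similar free area exists so the fan is not starved.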

    Read the article

  • Ctrl + 1 and Ctrl + 2 key combinations don't work

    - by musicfreak
    I noticed back in August (when I got StarCraft 2) that the key combinations Ctrl + 1 and Ctrl + 2 didn't work. I thought this was weird because Ctrl + 3 and all the other combinations worked fine (including Shift + 1, etc), so I didn't think much of it; I just shrugged it off as a SC2 bug. Now, 4 months later, I decided to play a completely unrelated game--Dawn of War 2--and noticed the same thing: those two specific key combinations don't work. To make sure I wasn't going insane, I tried it in Chrome and a couple other applications, and alas, it didn't work. I remember playing strategy games over the summer before StarCraft 2 and it worked fine. Any idea as to what went wrong? My keyboard, a Microsoft Wireless Keyboard 1000 (I know, insert Microsoft joke here), is a little over a year old, so I'm going to assume it's not dying until proven otherwise. Things I've tried ActiveHotkeys says the key combination is not a global hotkey. Tried another keyboard--still doesn't work. The key combinations do work in a virtual machine (tried with both Windows and Ubuntu as guests).

    Read the article

  • How does Windows 7 DNS client work?

    - by Mark Allison
    I am using a local DHCP and DNS server on my home network on a linux machine. It is running CentOS 6.3 with dnsmasq 2.48. It's all working fine except for local DNS lookups for Windows machines only. I have a mix of Ubuntu, CentOS and Windows machines on the network, some virtual, some physical. I have a machine called boron and the domain is called localdomain If I ping boron from any linux machine, I get [root@lithium lists]# ping -c3 boron PING boron.localdomain (10.0.0.5) 56(84) bytes of data. 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=1 ttl=64 time=0.740 ms 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=2 ttl=64 time=0.478 ms 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=3 ttl=64 time=0.458 ms --- boron.localdomain ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.458/0.558/0.740/0.131 ms If I do it from my Windows 7 machine, I get: Ping request could not find host boron. Please check the name and try again. If I try ping boron.localdomain I get: Pinging boron.localdomain [67.215.65.132] with 32 bytes of data: Reply from 67.215.65.132: bytes=32 time=16ms TTL=57 Reply from 67.215.65.132: bytes=32 time=188ms TTL=57 Reply from 67.215.65.132: bytes=32 time=15ms TTL=57 Reply from 67.215.65.132: bytes=32 time=14ms TTL=57 Ping statistics for 67.215.65.132: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 14ms, Maximum = 188ms, Average = 58ms which is clearly wrong. Why is it going out to the internet? Why can't my windows machine resolve the boron hostname to a FQDN? My Windows machines and linux machines get their network config from DHCP. UPDATE If I do ipconfig /all in Windows, it looks as I would expect: Windows IP Configuration Host Name . . . . . . . . . . . . : lanthanum Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : .localdomain Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : .localdomain Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller Physical Address. . . . . . . . . : 50-E5-49-38-FC-A2 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 10.0.0.57(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : 23 August 2012 13:58:45 Lease Expires . . . . . . . . . . : 24 August 2012 07:58:48 Default Gateway . . . . . . . . . : 10.0.0.6 DHCP Server . . . . . . . . . . . : 10.0.0.6 DNS Servers . . . . . . . . . . . : 10.0.0.6 208.67.222.222 208.67.220.220 NetBIOS over Tcpip. . . . . . . . 
: Enabled When I do an nslookup I get: Server: carbon.localdomain Address: 10.0.0.6 *** carbon.localdomain can't find boron: Unspecified error However if I do ifconfig -a in Linux I get: [root@nitrogen ~]# ifconfig -a eth0 Link encap:Ethernet HWaddr 00:0C:29:AF:EC:2A inet addr:10.0.0.7 Bcast:10.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:187687 errors:0 dropped:0 overruns:0 frame:0 TX packets:5857 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:23910700 (22.8 MiB) TX bytes:712964 (696.2 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:329894 errors:0 dropped:0 overruns:0 frame:0 TX packets:329894 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:67153143 (64.0 MiB) TX bytes:67153143 (64.0 MiB) and nslookup: [root@nitrogen ~]# nslookup boron Server: 10.0.0.6 Address: 10.0.0.6#53 Name: boron Address: 10.0.0.5 Both machines are on the same network using the same DHCP server. UPDATE 2 I thought the issue was resolved but I am getting intermittent DNS resolving issues but only on my Windows 7 machine. All my linux boxes are fine. This is what happens when I ping and nslookup from Windows to a Windows 2008 Server: C:\Users\mark>nslookup magnesium Server: carbon.localdomain Address: 10.0.0.6 Name: magnesium.localdomain Address: 10.0.0.12 C:\Users\mark>ping magnesium Pinging magnesium.localdomain [67.215.65.132] with 32 bytes of data: Reply from 67.215.65.132: bytes=32 time=267ms TTL=57 Reply from 67.215.65.132: bytes=32 time=162ms TTL=57 Reply from 67.215.65.132: bytes=32 time=510ms TTL=57 Reply from 67.215.65.132: bytes=32 time=146ms TTL=57 Ping statistics for 67.215.65.132: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 146ms, Maximum = 510ms, Average = 271ms And from Linux: [root@beryllium ~]# ping -c4 magnesium PING magnesium.localdomain (10.0.0.12) 56(84) bytes of data. 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=1 ttl=128 time=0.176 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=2 ttl=128 time=0.634 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=3 ttl=128 time=0.685 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=4 ttl=128 time=0.263 ms --- magnesium.localdomain ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3002ms rtt min/avg/max/mdev = 0.176/0.439/0.685/0.223 ms [root@beryllium ~]# nslookup magnesium Server: 10.0.0.6 Address: 10.0.0.6#53 Name: magnesium.localdomain Address: 10.0.0.12 UPDATE 3 I stopped the Windows DNS client on my Windows 7 machine with net stop dnscache and it is now working fine. It would be nice to get DNS working with the DNS client on, but I might be OK without it, what do you think?
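
    One way to see the Windows-side symptom from a script is to compare how the short name and the fully qualified name resolve. The following is just a diagnostic sketch (Python, hostnames taken from the post); if the short name fails or the FQDN comes back with a public address instead of 10.0.0.5, the suffix handling or the extra upstream DNS servers listed in the adapter settings are the likely place to look.

        import socket

        def resolve(name):
            """Return the set of IPv4 addresses a name resolves to, or the error."""
            try:
                infos = socket.getaddrinfo(name, None, socket.AF_INET)
                return sorted({info[4][0] for info in infos})
            except socket.gaierror as exc:
                return "lookup failed: %s" % exc

        for name in ("boron", "boron.localdomain"):
            print(name, "->", resolve(name))

    Run from the Windows 7 box, the two results should match the nslookup output above; a mismatch suggests the problem is on the client side rather than in dnsmasq.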

    Read the article

  • Picking a degree path...

    - by Chris
    I'll be going to University of South Florida soon, and have to choose between two degrees, I want to head into general Server (IT) administration for a small / medium business. Setting up computers, imaging, managing file servers / logon servers /etc. * I had to change the http to hxxp in order to post. I have two degrees I'm currently choosing between: - BSAS hxxp://www.poly.usf.edu/Academics/AppliedAS/BSAS-IT/Program_of_Study.html - BSIT hxxp://www.poly.usf.edu/IT/ I like the idea of a BSAS because it'll get me out sooner, and then I can work on a few certifications to "match" the BSIT... I'm just worried companies will look at that as a "lesser" degree to a BSIT (or even a CS degree.) What are your guys' thoughts on these two degrees? The BSIT has more math, which I still have about 2 more classes to go through (I'll be heading to USF this August.) while the BSIT doesn't require those 2 extra math classes. I keep on hearing from people that when they hire you for your first job, they don't care which degree you have, as long as it's relevant and it's a 4-year degree, is this true?

    Read the article

  • Drowning in documents - recommend doc management solutions?

    - by Martin Day
    I've been researching document management lately. I want to organise my docs at home and also at the office. Finding affordable solutions one can actually test drive is quite hard. Some that I've downloaded just don't seem to work (testing on brand new Vista PC). I've seen some software on Amazon like Paperport but not really sure what they're like. For home I'd like something to organise files, full text search, good scanner integration, nice interface etc. But for the office it seems harder. I need something that does proper workflow and keeps versions. It will have an audit trail. Documents can be approved, checked in/out etc. I know a few clients who would like something similar. It would be great just to import thousands of documents from a shared drive and get them indexed with dupes killed. I'd like to be super clear about how/where the documents are being stored so that maintenance and backups are clear. My Google/twitter searches lead back to the same tired and vague webpages pushing what look like expensive and custom made solutions. Some might be very good I suppose but it's darn hard to tell. I don't mind a hosted package but all in all I don't think something like Google Docs, as good as it is now, will work. There are too many quirks and missing features (as compared to Office). Being able to work directly with the common Office file formats is important. I've noted a similar sounding question asked here back in August but it didn't seem to turn up too many solutions that I could easily and quickly apply. Also there could have been some changes since then so I feel it's worth asking.

    Read the article

  • How to (properly) back up a live QEMU/KVM VM?

    - by Roman
    I'm currently engineering a backup solution for KVM VM's as an additional measure to traditional backups. Unfortunately, all currently (August 2013) existing solutions I came across so far either: do not ensure a consistent backup of the VM (losing RAM state, creating a dirty image, or other things), or require lengthy downtime (complete VM shutdown while backing up). I'm aware of QEMU/libvirt's functionality of taking snapshots, however, it's not yet usable since: image-internal snapshots present you with an ever-changing image file, resulting in a likely dirty backup (assuming one uses qcow2 images at all). one cannot yet merge a currently active external snapshot into the original backing image ("blockcommit"). Out of the above reasons, I'm now implementing a script that: Saves the VM's state and halts it Sets up a devicemapper snapshot(s) where the VM's disk images and state reside Resumes the VM Mount the snapshot(s) of step 2. Backs up the VM's disk and state (configuration for convenience) Merges back the snapshot(s). If I got everything right, this will take consistent backups of VM's with only seconds (if at all, since 1-3 is fast, possibly sub-second) of downtime. Of course, when restoring, the VM will be way in the past, but at least giving me the option of an orderly shutdown/reboot. Am I missing something with this solution? Or has someone indeed already implemented this?
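
    For illustration, here is a heavily simplified Python sketch of the six steps above, using virsh for the save/restore and LVM for the device-mapper snapshot. Every name in it (domain name, volume paths, backup target) is made up, error handling is omitted, and whether `virsh save` and `lvcreate -s` fit your storage layout is an assumption you would need to verify.

        import subprocess

        DOMAIN = "myvm"                           # hypothetical libvirt domain name
        STATE_FILE = "/var/tmp/myvm.state"        # where the RAM/CPU state is saved
        DISK_LV = "/dev/vg0/myvm-disk"            # hypothetical LV holding the disk images
        SNAP_NAME = "myvm-backup-snap"
        SNAP_DEV = "/dev/vg0/" + SNAP_NAME
        MOUNTPOINT = "/mnt/vm-backup-snap"
        BACKUP_DIR = "/backup/myvm/"

        def run(*cmd):
            print("+", " ".join(cmd))
            subprocess.check_call(cmd)

        # 1. Save the VM's state and halt it (the domain stops once the state is written).
        run("virsh", "save", DOMAIN, STATE_FILE)

        # 2. Snapshot the logical volume holding the disk images while the VM is down.
        run("lvcreate", "--snapshot", "--size", "5G", "--name", SNAP_NAME, DISK_LV)

        # 3. Resume the VM from the saved state; downtime ends here.
        run("virsh", "restore", STATE_FILE)

        # 4./5. Mount the snapshot read-only and back up disk images plus saved state at leisure.
        run("mount", "-o", "ro", SNAP_DEV, MOUNTPOINT)
        run("rsync", "-a", MOUNTPOINT + "/", BACKUP_DIR)
        run("cp", STATE_FILE, BACKUP_DIR)

        # 6. Release the snapshot again once the copy is done.
        run("umount", MOUNTPOINT)
        run("lvremove", "-f", SNAP_DEV)

    Whether "seconds of downtime" is really achieved depends mostly on how long `virsh save` takes to write the guest's RAM to disk.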

    Read the article

  • sendmail appends server name to external domains when relaying

    - by Chris
    My server is set to send all email to a corporate relay server. For the company domain, it works perfectly. I've recently found emails being sent to an outside domain are getting the hostname of my server appended to the email prior to being sent. Here is the log entry for one such attempt. Nov 6 09:46:45 myservername sendmail[45023]: rA6EkjiI045023: [email protected], delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30590, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (rA6Ekj2g045037 Message accepted for delivery) Nov 6 09:46:45 myservername sendmail[45061]: rA6Ekj2g045037: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=120885, relay=relay.company.com [x.x.x.x], dsn=2.0.0, stat=Sent (ok: Message 342335947 accepted) Notice the email address difference between it being accepted by my server for delivery (correct email address), and being sent and accepted by the corporate relay (incorrect with server name appended). To make it more interesting, the application on my server uses email for user account verification/activation. In August, this particular user was able to register his account and activate it. I have made no configuration changes to mail since setting the server up over a year ago. DNS is also a corporate service. I've never touched my /etc/resolv.conf configuration. domain company.com nameserver <ip1> nameserver <ip2> search myservername Thanks!

    Read the article

  • Unnecessary Error Message Being Displayed

    - by ThatMacLad
    I've set up a form to update my blog and it was working fine up until about this morning. It keeps on turning up with an Invalid Entry ID error on the edit post page when I click the update button despite the fact that it updates the homepage. All help is seriously appreciated. <html> <head> <title>Ultan's Blog | New Post</title> <link rel="stylesheet" href="css/editpost.css" type="text/css" /> </head> <body> <div class="new-form"> <div class="header"> </div> <div class="form-bg"> <?php mysql_connect ('localhost', 'root', 'root') ; mysql_select_db ('tmlblog'); if (isset($_POST['update'])) { $id = htmlspecialchars(strip_tags($_POST['id'])); $month = htmlspecialchars(strip_tags($_POST['month'])); $date = htmlspecialchars(strip_tags($_POST['date'])); $year = htmlspecialchars(strip_tags($_POST['year'])); $time = htmlspecialchars(strip_tags($_POST['time'])); $entry = $_POST['entry']; $title = htmlspecialchars(strip_tags($_POST['title'])); if (isset($_POST['password'])) $password = htmlspecialchars(strip_tags($_POST['password'])); else $password = ""; $entry = nl2br($entry); if (!get_magic_quotes_gpc()) { $title = addslashes($title); $entry = addslashes($entry); } $timestamp = strtotime ($month . " " . $date . " " . $year . " " . $time); $result = mysql_query("UPDATE php_blog SET timestamp='$timestamp', title='$title', entry='$entry', password='$password' WHERE id='$id' LIMIT 1") or print ("Can't update entry.<br />" . mysql_error()); header("Location: post.php?id=" . $id); } if (isset($_POST['delete'])) { $id = (int)$_POST['id']; $result = mysql_query("DELETE FROM php_blog WHERE id='$id'") or print ("Can't delete entry.<br />" . mysql_error()); if ($result != false) { print "The entry has been successfully deleted from the database."; exit; } } if (!isset($_GET['id']) || empty($_GET['id']) || !is_numeric($_GET['id'])) { die("Invalid entry ID."); } else { $id = (int)$_GET['id']; } $result = mysql_query ("SELECT * FROM php_blog WHERE id='$id'") or print ("Can't select entry.<br />" . $sql . "<br />" . 
mysql_error()); while ($row = mysql_fetch_array($result)) { $old_timestamp = $row['timestamp']; $old_title = stripslashes($row['title']); $old_entry = stripslashes($row['entry']); $old_password = $row['password']; $old_title = str_replace('"','\'',$old_title); $old_entry = str_replace('<br />', '', $old_entry); $old_month = date("F",$old_timestamp); $old_date = date("d",$old_timestamp); $old_year = date("Y",$old_timestamp); $old_time = date("H:i",$old_timestamp); } ?> <form method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>"> <p><input type="hidden" name="id" value="<?php echo $id; ?>" /> <strong><label for="month">Date (month, day, year):</label></strong> <select name="month" id="month"> <option value="<?php echo $old_month; ?>"><?php echo $old_month; ?></option> <option value="January">January</option> <option value="February">February</option> <option value="March">March</option> <option value="April">April</option> <option value="May">May</option> <option value="June">June</option> <option value="July">July</option> <option value="August">August</option> <option value="September">September</option> <option value="October">October</option> <option value="November">November</option> <option value="December">December</option> </select> <input type="text" name="date" id="date" size="2" value="<?php echo $old_date; ?>" /> <select name="year" id="year"> <option value="<?php echo $old_year; ?>"><?php echo $old_year; ?></option> <option value="2004">2004</option> <option value="2005">2005</option> <option value="2006">2006</option> <option value="2007">2007</option> <option value="2008">2008</option> <option value="2009">2009</option> <option value="2010">2010</option> </select> <strong><label for="time">Time:</label></strong> <input type="text" name="time" id="time" size="5" value="<?php echo $old_time; ?>" /></p> <p><strong><label for="title">Title:</label></strong> <input type="text" name="title" id="title" value="<?php echo $old_title; ?>" size="40" /> </p> <p><strong><label for="password">Password protect?</label></strong> <input type="checkbox" name="password" id="password" value="1"<?php if($old_password == 1) echo " checked=\"checked\""; ?> /></p> <p><textarea cols="80" rows="20" name="entry" id="entry"><?php echo $old_entry; ?></textarea></p> <p><input type="submit" name="update" id="update" value="Update"></p> </form> <p><strong>Be absolutely sure that this is the post that you wish to remove from the blog!</strong><br /> </p> <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> <input type="hidden" name="id" id="id" value="<?php echo $id; ?>" /> <input type="submit" name="delete" id="delete" value="Delete" /> </form> </div> </div> </div> <div class="bottom"></div> </body> </html>

    Read the article

  • DD-WRT No Internet connection over LAN

    - by algorithms
    I flashed the DD-WRT firmware on my TP-Link WR1043ND router and although after cloning the PC's MAC-Address it gets the correct IP from my ISP, the internet connection over LAN just won't work. The strange thing is it does work flawlessly over W-LAN, which tells me the problem should lie somehow in the default LAN settings or the PC. Any idea what the problem might be? UPDATE: It seems the problem is the desktop PC, since the laptop can connect to the interet via ethernet without any problems. ipconfig /all seems totally normal (dhcp, dns, gateway all set to 192.168.1.1) I already tried the following things without success: disabling firewall rebooting router/modem/pc router hard-reset resetting tcp/ip and winsock manual setting of DNS/IP/Gateway Here is the ipconfig /all: Windows-IP-Konfiguration Hostname . . . . . . . . . . . . : Nitro-PC Primäres DNS-Suffix . . . . . . . : Knotentyp . . . . . . . . . . . . : Hybrid IP-Routing aktiviert . . . . . . : Nein WINS-Proxy aktiviert . . . . . . : Nein Ethernet-Adapter LAN-Verbindung 2: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Beschreibung. . . . . . . . . . . : TAP-Win32 Adapter V9 Physikalische Adresse . . . . . . : 00-FF-56-CA-66-8D DHCP aktiviert. . . . . . . . . . : Ja Autokonfiguration aktiviert . . . : Ja Ethernet-Adapter LAN-Verbindung: Verbindungsspezifisches DNS-Suffix: Beschreibung. . . . . . . . . . . : Realtek PCIe GBE Family Controller Physikalische Adresse . . . . . . : 48-5B-39-5B-DE-17 DHCP aktiviert. . . . . . . . . . : Ja Autokonfiguration aktiviert . . . : Ja Verbindungslokale IPv6-Adresse . : fe80::6934:b121:9eab:c6ce%10(Bevorzugt) IPv4-Adresse . . . . . . . . . . : 192.168.1.18(Bevorzugt) Subnetzmaske . . . . . . . . . . : 255.255.255.0 Lease erhalten. . . . . . . . . . : Donnerstag, 30. August 2012 10:52:30 Lease läuft ab. . . . . . . . . . : Freitag, 31. August 2012 10:52:30 Standardgateway . . . . . . . . . : 192.168.1.1 DHCP-Server . . . . . . . . . . . : 192.168.1.1 DHCPv6-IAID . . . . . . . . . . . : 239622969 DHCPv6-Client-DUID. . . . . . . . : 00-01-00-01-17-43-0D-B2-48-5B-39-5B-DE-17 DNS-Server . . . . . . . . . . . : 192.168.1.1 NetBIOS über TCP/IP . . . . . . . : Aktiviert Tunneladapter isatap.{56CA668D-9112-4399-9D9A-F1D42F0E52DE}: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Beschreibung. . . . . . . . . . . : Microsoft-ISATAP-Adapter Physikalische Adresse . . . . . . : 00-00-00-00-00-00-00-E0 DHCP aktiviert. . . . . . . . . . : Nein Autokonfiguration aktiviert . . . : Ja Tunneladapter Teredo Tunneling Pseudo-Interface: Verbindungsspezifisches DNS-Suffix: Beschreibung. . . . . . . . . . . : Teredo Tunneling Pseudo-Interface Physikalische Adresse . . . . . . : 00-00-00-00-00-00-00-E0 DHCP aktiviert. . . . . . . . . . : Nein Autokonfiguration aktiviert . . . : Ja IPv6-Adresse. . . . . . . . . . . : 2001:0:5ef5:79fd:1432:3dcd:3f57:feed(Bevorzugt) Verbindungslokale IPv6-Adresse . : fe80::1432:3dcd:3f57:feed%12(Bevorzugt) Standardgateway . . . . . . . . . : :: NetBIOS über TCP/IP . . . . . . . : Deaktiviert Tunneladapter isatap.{AD21069D-D2AF-423E-BF59-0B1CD0D235E8}: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Beschreibung. . . . . . . . . . . : Microsoft-ISATAP-Adapter #2 Physikalische Adresse . . . . . . : 00-00-00-00-00-00-00-E0 DHCP aktiviert. . . . . . . . . . : Nein Autokonfiguration aktiviert . . . : Ja Tunneladapter 6TO4 Adapter: Medienstatus. . . . . . . . . . . 
: Medium getrennt Verbindungsspezifisches DNS-Suffix: Beschreibung. . . . . . . . . . . : Microsoft-6zu4-Adapter Physikalische Adresse . . . . . . : 00-00-00-00-00-00-00-E0 DHCP aktiviert. . . . . . . . . . : Nein Autokonfiguration aktiviert . . . : Ja route PRINT IPv4-Routentabelle =========================================================================== Aktive Routen: Netzwerkziel Netzwerkmaske Gateway Schnittstelle Metrik 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.18 10 127.0.0.0 255.0.0.0 Auf Verbindung 127.0.0.1 306 127.0.0.1 255.255.255.255 Auf Verbindung 127.0.0.1 306 127.255.255.255 255.255.255.255 Auf Verbindung 127.0.0.1 306 192.168.1.0 255.255.255.0 Auf Verbindung 192.168.1.18 266 192.168.1.18 255.255.255.255 Auf Verbindung 192.168.1.18 266 192.168.1.255 255.255.255.255 Auf Verbindung 192.168.1.18 266 224.0.0.0 240.0.0.0 Auf Verbindung 127.0.0.1 306 224.0.0.0 240.0.0.0 Auf Verbindung 192.168.1.18 266 255.255.255.255 255.255.255.255 Auf Verbindung 127.0.0.1 306 255.255.255.255 255.255.255.255 Auf Verbindung 192.168.1.18 266 =========================================================================== Stndige Routen: Keine

    Read the article

  • Wired connection periodically disconnects requires ipconfig /release and /renew to reconnect

    - by Sesame
    I just got back into University and after a day of using the internet I suddenly was unable to visit other webpages even though I was still able to chat. I restarted the computer and the internet could no longer visit webpages at all. I got a DNS error from the browser (chrome) and the troubleshooter. The connection comes up as "Network 3" even though It was "Network 2" when it worked. I compared ipconfig /all logs and they seemed identical when it was and was not working. I've found two ways to get internet connection (they no longer work): run ipconfig /release and ipconfig /renew several times set a random address and have the troubleshooter fix the new found dhcp problem (before it says dns cannot fix). I checked and it said DHCP was enabled. But either step one or two usually needs to be repeated several times before the network will change back to "network 2" from the defunct "network 3" and after an hour or so I have problems again. I've tried: Uninstalling windows defender and turning off firewall (windows firewall). Updating Qualcomm Ethernet driver - it is up to date. System Restore (problem resurfaces quickly...it's possible this has something to do with windows update?) Flushing dns and setting dns myself (google one and others). Booting in safe mode with networking...didn't fix anything Reinstalling Ethernet Driver Using other ethernet cable, other wall port. I'm out of ideas. Ipconfig /all: Windows IP Configuration Host Name . . . . . . . . . . . . : NGoller Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : Home Ethernet adapter UConnect: Connection-specific DNS Suffix . : Home Description . . . . . . . . . . . : Qualcomm Atheros AR8151 PCI-E Gigabit Ethernet Controller Physical Address. . . . . . . . . : 90-2B-34-50-33-F4 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IPv6 Address. . . . . . . . . . . : fd00::2086:628:f0a1:73c3(Preferred) Temporary IPv6 Address. . . . . . : fd00::c86b:370:b1d9:bd73(Preferred) Link-local IPv6 Address . . . . . : fe80::2086:628:f0a1:73c3%11(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.0.33(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : Wednesday, August 28, 2013 11:58:37 PM Lease Expires . . . . . . . . . . : Thursday, August 29, 2013 11:58:37 PM Default Gateway . . . . . . . . . : 192.168.0.1 DHCP Server . . . . . . . . . . . : 192.168.0.1 DNS Servers . . . . . . . . . . . : 192.168.0.1 NetBIOS over Tcpip. . . . . . . . : Enabled Tunnel adapter isatap.Home: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Home Description . . . . . . . . . . . : Microsoft ISATAP Adapter Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Tunnel adapter Teredo Tunneling Pseudo-Interface: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes I'm getting these errors commonly: The IP address lease 155.97.227.199 for the Network Card with network address 0x902B345033F4 has been denied by the DHCP server 10.0.1.1 (The DHCP Server sent a DHCPNACK message). 
Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected. By the way, Operating System: Windows 7 - 64 Bit. Have downloaded latest windows updates. Update: And my two fixes don't work any more :( . This is now what happens when I try to Ipconfig /renew: C:\Users\Nikko\Desktop>ipconfig /renew Windows IP Configuration An error occurred while renewing interface Local Area Connection : The name spec ified in the network control block (NCB) is in use on a remote adapter. The NCB is the data. An error occurred while releasing interface Loopback Pseudo-Interface 1 : The sy stem cannot find the file specified. Update 2: So my internet is randomly working again today. The IP address I had before was a local one while the university address should start with 155... I didn't do anything to the settings...it's bizarre that it all of a sudden works. Thanks!

    Read the article

  • "Win32 exception occurred releasing IUnknown at..." error using Pylons and WMI

    - by Anders
    Hi all, I'm using Pylons in combination with the WMI module to do some basic system monitoring of a couple of machines; for POSIX-based systems everything is simple – for Windows, not so much. Doing a request to the Pylons server to get the current CPU load, however, is not working well, at least not with the WMI module. First I simply did (something like) this: c = wmi.WMI() for cpu in c.Win32_Processor(): value = cpu.LoadPercentage However, that gave me an error when accessing this module via Pylons (GET http://ip:port/cpu): raise x_wmi_uninitialised_thread ("WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex]") x_wmi_uninitialised_thread: <x_wmi: WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex] (no underlying exception)> Looking at http://timgolden.me.uk/python/wmi/tutorial.html, I wrapped the code according to the example under the topic "CoInitialize & CoUninitialize", which makes the code work, but it keeps throwing "Win32 exception occurred releasing IUnknown at..." I then looked at http://mail.python.org/pipermail/python-win32/2007-August/006237.html and the follow-up post and tried to follow that – however, pythoncom._GetInterfaceCount() is always 20. I'm guessing this is somehow related to Pylons spawning worker threads and the like, but I'm kinda lost here; advice would be nice. Thanks in advance, Anders
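
    For reference, the pattern from the WMI tutorial mentioned above amounts to initializing COM per thread before creating the WMI object and uninitializing it afterwards. A minimal sketch (the Pylons controller plumbing around it is assumed, not taken from the post):

        import pythoncom
        import wmi

        def cpu_load_percentages():
            """Query Win32_Processor from a worker thread, initializing COM for this thread."""
            pythoncom.CoInitialize()      # required once per thread that touches COM/WMI
            try:
                c = wmi.WMI()
                return [cpu.LoadPercentage for cpu in c.Win32_Processor()]
            finally:
                # Always balance CoInitialize, even if the WMI query raises.
                pythoncom.CoUninitialize()

    One common reading of the "releasing IUnknown" warning is that some COM object is still alive when CoUninitialize runs, so keeping the WMI object local to the request handler (rather than cached at module level) is worth trying first.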

    Read the article

  • Apk Expansion Files - Application Licensing - Developer account - NOT_LICENSED response

    - by mUncescu
    I am trying to use the APK Expansion Files extension for Android. I have uploaded the APK to the server along with the extension files. If the application was previously published i get a response from the server saying NOT_LICENSED: The code I use is: APKExpansionPolicy aep = new APKExpansionPolicy(mContext, new AESObfuscator(getSALT(), mContext.getPackageName(), deviceId)); aep.resetPolicy(); LicenseChecker checker = new LicenseChecker(mContext, aep, getPublicKey(); checker.checkAccess(new LicenseCheckerCallback() { @Override public void allow(int reason) { @Override public void dontAllow(int reason) { try { switch (reason) { case Policy.NOT_LICENSED: mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_UNLICENSED); break; case Policy.RETRY: mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL); break; } } finally { setServiceRunning(false); } } @Override public void applicationError(int errorCode) { try { mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL); } finally { setServiceRunning(false); } } }); So if the application wasn't previously published the Allow method is called. If the application was previously published and now it isn't the dontAllow method is called. I have tried: http://developer.android.com/guide/google/play/licensing/setting-up.html#test-response Here it says that if you use a developer or test account on your test device you can set a specific response, I use LICENSED as response and still get NOT_LINCESED. Resetting the phone, clearing google play store cache, application data. Changing the versioncode number in different combinations still doesn't work. Edit: In case someone else was facing this problem I received an mail from the google support team We are aware that newly created accounts for testing in-app billing and Google licensing server (LVL) return errors, and are working on resolving this problem. Please stay tuned. In the meantime, you can use any accounts created prior to August 1st, 2012, for testing. So it seems to be a problem with their server, if I use the main developer thread everything works fine.

    Read the article

  • Javascript help fixing a working time ago date function to simply show seconds ago.

    - by Scarface
    Hey guys quick question, I am using a javascript function and it works except I can only make it say 0 seconds ago, when the time is under a minute. Can anyone quickly explain what I am doing wrong? function handleDate( timestamp ) { var n=new Date(), t, ago = " "; if( timestamp ) { t = Math.round( (n.getTime()/1000 - timestamp)/60 ); ago += handleSinceDateEndings( t, timestamp ); } else { ago += ""; } return ago; } function handleSinceDateEndings( t, original_timestamp ) { var ago = " ", date; if( t <= 1 ) { ago += t + " seconds ago"; } else if( t<60) { ago += t + " mins ago"; } else if( t>= 60 && t<= 120) { ago += Math.floor( t / 60 ) + " hour ago" } else if( t<1440 ) { //console.log(t) ago += Math.floor( t / 60 ) + " hours ago"; } else if( t< 2880) { ago += "1 day ago"; } else if( t > 2880 && t < 4320 ) { ago += "2 days ago"; } else { date = new Date( parseInt( original_timestamp )*1000 ) ago += months[ date.getMonth() ] + " " + date.getDate(); } return ago; } var months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"];
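
    The reason the function can only ever print "0 seconds ago" is that t is computed in minutes (the division by 60) before the seconds branch runs. A possible fix is to keep the delta in seconds and only convert when a branch needs minutes, hours or days; sketched below in Python purely to show the branching (the names mirror the JavaScript, but nothing here is the poster's code):

        import time

        def time_ago(timestamp):
            """Return a rough 'ago' string for a UNIX timestamp (seconds)."""
            seconds = int(time.time() - timestamp)   # keep the raw delta in seconds
            if seconds < 60:
                return "%d seconds ago" % seconds
            minutes = seconds // 60
            if minutes < 60:
                return "%d mins ago" % minutes
            hours = minutes // 60
            if hours < 24:
                return "%d hours ago" % hours
            days = hours // 24
            if days < 3:
                return "%d days ago" % days
            return time.strftime("%B %d", time.localtime(timestamp))

    The same restructuring applies one-to-one to the JavaScript version: compute the difference in seconds first and divide only inside the later branches.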

    Read the article

  • google appengine local datastore integration testing with spring

    - by mirror303
    Hi all, I want to write some integration tests to see how my Spring-managed DAOs behave when talking to the App Engine datastore. Following the Spring manual, I will be providing my test classes with the proper annotations: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = { "classpath:applicationContext.xml" }) After a lot of browsing I found this blog post dating back to August '09 from somebody doing exactly what I want to achieve. It involves writing a TestEnvironment class that implements ApiProxy.Environment, plus talking to ApiProxyLocalImpl. However, if I look at the current docs (for version 1.3.1), it seems that this has been replaced by creating an instance of the framework-provided LocalDatastoreServiceTestConfig, which is passed to a LocalServiceTestHelper. It is too bad that the App Engine docs don't show an example of how to do this with JPA, because then the Spring wiring would be trivial. Trying to follow the route outlined in the blog post has me running into compiler messages telling me that classes such as ApiProxyLocalImpl are not visible to me. Hence, there must be a new way of doing it, which probably involves the LocalServiceTestHelper. My question: does anybody know how? I know I will need to configure an EntityManagerFactory and provide it with the datastore connection somehow... but how? :)

    Read the article

  • Passenger: "Missing these required gems redgreen"

    - by Michael Stum
    Hello, total ruby newbie, trying to setup a Rails/MongoDB application on Mac OS X Snow leopard. Installed Ruby 1.9.1 and RubyGems 1.3.7, which ruby and which gem point to the same directory. I'm using the Snow Leopard built-in apache and Passenger 2.2.11. I'm using the rails template from the mongo-site which seems to work okay overall. The exact error that passenger gives me is: /Users/User/Sites/feuerapp/vendor/rails/railties/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement **Notice: C extension not loaded. This is required for optimum MongoDB Ruby driver performance. You can install the extension as follows: gem install bson_ext If you continue to receive this message after installing, make sure that the bson_ext gem is in your load path and that the bson_ext and mongo gems are of the same version. Missing these required gems: redgreen You're running: ruby 1.9.1.376 at /usr/local/bin/ruby rubygems 1.3.7 at /Users/User/.gem/ruby/1.9.1, /usr/local/lib/ruby/gems/1.9.1 Runrake gems:installto install the missing gems. The weird thing is that redgreen is installed and looks fine to me: Dahlia:feuerapp User$ ls -la vendor/gems/ total 0 drwxr-xr-x 7 User staff 238 May 18 22:56 . drwxr-xr-x 5 User staff 170 May 18 23:00 .. drwxr-xr-x 11 User staff 374 May 18 22:56 factory_girl-1.2.4 drwxr-xr-x 11 User staff 374 May 18 22:56 mocha-0.9.8 drwxr-xr-x 7 User staff 238 May 18 22:56 mongo_mapper-0.7.6 drwxr-xr-x 7 User staff 238 May 18 22:56 redgreen-1.2.2 drwxr-xr-x 11 User staff 374 May 18 22:56 shoulda-2.10.3 Commenting out this line in environment.rb "solves" the issue, but that's not really want I want: config.gem 'redgreen' I don't understand anything of gems yet, but from my limited understanding, redgreen should be there and found?

    Read the article

  • iPhone - Failed to save the videos metadata to the filesystem.

    - by cameron
    My application uses the UIImagePicker to allow the user to use the camera and capture a photo to edit/etc. I am getting the error message below: 2010-02-03 10:41:24.018 LivingRoom[5333:5303] Failed to save the videos metadata to the filesystem. Maybe the information did not conform to a plist. Program received signal: “EXC_BAD_ACCESS”. A search in Google brings up a number of threads in various forums, with no ultimate response/root cause/suggestions on how to fix/debug. An example is the thread below with code which is very similar to my app: http://groups.google.com/group/iphonesdkdevelopment/browse_thread/thread/6b7b396c62bef398 The error disappears for a while (10 tests in a row, no errors) if I reboot the iPhone. I have not been able to determine what makes it reoccur after a reboot, but it does. I am not using the video source and the fact that a reboot solves the problem for a while points to some sort of mem leak (perhaps?). The problem always shows up on both the iPhone (even after the reboot) and the simulator when choosing a photo from the album, but the app does not crash on the iPhone or the simulator. The same app with exact code did not have the error message when compiled using SDK 3.0 (last August/September). But 3.1.x has always produced the error message, which means that once a week or so the iPhone needs to be rebooted for the error to disappear. The users are not happy with that solution any longer!! Any suggestion/clues would be greatly appreciated.

    Read the article

  • testing devise with shoulda and machinist

    - by mattherick
    Hello! I'd like to test my app with shoulda and machinist. I use the devise authentication gem. I get the following error: $ ruby unit/page_test.rb c:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:in load_missing_constant': uninitialized constant Admins (NameError) from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in const_missing' from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:92:in const_missing' from c:/Users/Mattherick/Desktop/heimspiel/heimspiel_app/app/controllers/admins_controller.rb:1 from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require' from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in require' from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in require' from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in require_or_load' from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:224:in depend_on' ... 12 levels... from ./unit/../test_helper.rb:2 from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require' from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in require' from unit/page_test.rb:1 Does somebody have an idea what's wrong? If I don't use devise, my tests are okay. And my second question: does somebody have a good tutorial for setting up different roles with the devise gem? If I generate my own views and add a few attributes to my devise model, they won't be saved in the database. I read the docs on GitHub, but didn't really get it. mattherick

    Read the article
