Search Results

Search found 4417 results on 177 pages for 'low on totem pole'.

Page 100/177 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • Google music search: a better way to listen.

    - by anirudha
    Anyone who wants to listen to music online either pays an online music store a fair amount for streaming, or puts up with the poor quality on YouTube, where uploads are often illegal (the uploader usually has no permission or right to upload the material) and there is no guarantee about the ads or the quality. So forget YouTube and the rest, because Google music search is much better: just go there, search by movie name or song title, and click to listen. The quality is much better than the alternatives, though the music itself is not Google's; the results it serves come from other websites. One thing does feel wrong in Google music search: if I search “sajda” it never shows me results for “sadka”, even though in everyday use the two words are treated as the same, and a song title may be spelled either way. It would be better if, when I search for “sajda”, it offered a “Did you mean ‘Sadka’?” link, much as online bookstores suggest related keywords alongside your search results and show related products when you visit a product page. All things considered, it is a better option for users who want better-quality music without the search hassle.

    Read the article

  • .NET development on Macs

    - by Jeff
    I posted the “exciting” conclusion of my laptop trade-ins and issues on my personal blog. The links, in chronological order, are posted below. While those posts have all of the details about performance and software used, I wanted to comment on why I like using Macs in the first place. It started in 2006 when Apple released the first Intel-based Mac. As someone with a professional video past, I had been using Macs on and off since college (1995 graduate), so I was never terribly religious about any particular platform. I’m still not, but until recently, it was staggering how crappy PCs were. They were all plastic, disposable, commodity crap. I could never justify buying a PowerBook because I was a Microsoft stack guy. When Apple went Intel, they removed that barrier. They also didn’t screw around with selling to the low end (though the plastic MacBooks bordered on that), so even the base machines were pretty well equipped. Every Mac I’ve had, I’ve used for three years. Other than that first one, I’ve also sold each one, for quite a bit of money. Things have changed quite a bit, mostly within the last year. I’m actually relieved, because Apple needs competition at the high end. Other manufacturers are finally understanding the importance of industrial design. For me, I’ll stick with Macs for now, because I’m invested in OS X apps like Aperture and the Mac versions of Adobe products. As a Microsoft developer, it doesn’t even matter though… with Parallels, I Cmd-Tab and I’m in Windows. So after three and a half years with a wonderful 17” MBP and upgraded SSD, it was time to get something lighter and smaller (traveling light is critical with a toddler), and I eventually ended up with a 13” MacBook Air, with the i7 and 8 gig upgrades, and I love it. At home I “dock” it to a Thunderbolt Display. The posts:
    A new laptop
    .NET development on a Retina MacBook Pro with Windows 8
    Returning my MacBook Pro with Retina display
    .NET development on a MacBook Air with Windows 8

    Read the article

  • Organising data access for dependency injection

    - by IanAWP
    In our company we have a relatively long history of database-backed applications, but have only just begun experimenting with dependency injection. I am looking for advice about how to convert our existing data access pattern into one more suited to dependency injection. Some specific questions:
    - Do you create one access object per table (given that a table represents an entity collection)? One interface per table? All of these would need the low-level data access object to be injected, right? And if there are dozens of tables, wouldn't that make the composition root a nightmare?
    - Would you instead have a single interface that defines things like GetCustomer(), GetOrder(), and so on? If I take Entity Framework as the example, I have one container that exposes an object for each table, but that container doesn't conform to any interface itself, so it doesn't seem compatible with DI.
    What we do now, in case it helps: we normally manage data access through a generic data layer which exposes CRUD/transaction capabilities and has provider-specific subclasses which handle the creation of IDbConnection, IDbCommand, and so on. Actual table access uses Table classes that perform the CRUD operations associated with a particular table and accept/return the domain objects that the rest of the system deals with. These Table classes expose only static methods, and use a static DataAccess singleton instantiated from a config file.
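
    One way to keep the composition root manageable (a hedged sketch, not something from the original question; all type names are illustrative) is a single generic repository interface, so that dozens of tables reduce to one open-generic registration in the container rather than one interface per table:

        using System.Collections.Generic;

        // One interface covers every entity; a single ORM- or ADO.NET-backed
        // implementation is all the composition root has to wire up.
        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            IEnumerable<T> GetAll();
            void Add(T entity);
            void Remove(T entity);
        }

        // Consumers take the abstraction through the constructor instead of
        // calling a static DataAccess singleton.
        public class CustomerService
        {
            private readonly IRepository<Customer> _customers;

            public CustomerService(IRepository<Customer> customers)
            {
                _customers = customers;
            }

            public Customer Find(int id)
            {
                return _customers.GetById(id);
            }
        }

        public class Customer { public int Id; }   // stand-in entity for the sketch

    Most containers (Unity, Autofac, StructureMap and the like) can map the open generic IRepository<> to one implementation in a single registration, which is what keeps the composition root from becoming the nightmare described above.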

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that the libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on. Recently a client made us question one of our fundamental assumptions about the .NET Framework and web services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly, and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right? Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test our client's hypothesis.
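
    For readers wondering what the hand-rolled side of that test looks like in outline, here is a minimal sketch of the client's hypothesis (the URL, stylesheet, and output file are illustrative, and a real SOAP endpoint would need a POSTed SOAP envelope rather than a bare GET):

        using System.IO;
        using System.Net;
        using System.Xml;
        using System.Xml.Xsl;

        class ProxylessCaller
        {
            static void Main()
            {
                // Invoke the web service directly, skipping the generated proxy.
                HttpWebRequest request =
                    (HttpWebRequest)WebRequest.Create("http://example.com/service/GetData");

                using (WebResponse response = request.GetResponse())
                using (Stream stream = response.GetResponseStream())
                {
                    // Load the raw XML the service returned...
                    XmlDocument doc = new XmlDocument();
                    doc.Load(stream);

                    // ...and transform it straight to HTML with XSLT.
                    XslCompiledTransform xslt = new XslCompiledTransform();
                    xslt.Load("FormatResults.xslt");

                    using (XmlWriter writer = XmlWriter.Create("results.html"))
                    {
                        xslt.Transform(doc, writer);
                    }
                }
            }
        }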

    Read the article

  • ORA-600 Troubleshooting

    - by [email protected]
    Have you observed an ORA-600 or ORA-7445 reported in your alert log? The ORA-600 error is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered a low-level, unexpected condition. The ORA-600 error statement includes a list of arguments in square brackets:
    ORA-600 "internal error code, arguments: [%s], [%s], [%s], [%s], [%s]"
    The first argument is the internal message number or character string. This argument and the database version number are critical in identifying the root cause and the potential impact to your system. The remaining arguments in the ORA-600 error text supply further information (e.g. values of internal variables). Looking for the best way to diagnose? There is an ORA-600 Troubleshooter Tool available in My Oracle Support. This tool will lead you to applicable content in My Oracle Support on the problem; it can be used to investigate the problem with argument data from the error message, or you can pull out the first 10 or 15 stack pointers from the associated trace file to match up against known bugs.
    Note 153788.1 - ORA-600/ORA-7445 Troubleshooter
    Note 1082674.1 - A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]
    Also, take a quick look at the Master Note for Diagnosing ORA-600 (MasterNoteORA600.docx) for some tips on diagnosing.

    Read the article

  • ATI Catalyst driver 12.8 is not using hardware acceleration on Precise

    - by Jack Wright
    I've been using Ubuntu and ATI Catalyst for years. On my clean install of Ubuntu 12.04 I've noticed that Catalyst 12.6, and then 12.8, are not actually using my HD5750 GPU for hardware acceleration: high CPU usage, zero GPU load. Everything installed correctly with no hassles, and fglrxinfo and vainfo are correct as per this HowTo for Precise. I have an Ubuntu 10.04 installation with Catalyst 12.6 on the same hardware which does use the GPU (low CPU usage, high GPU load when transcoding video files or playing video content). The VA-API drivers are not installed on the 10.04 build; they are not mentioned in this HowTo for Lucid. fgl_glxgears frame rates on Precise are a fifth of the rates on Lucid.
    LUCID
    jw@Kworld:~$ fgl_glxgears
    Using GLX_SGIX_pbuffer
    16867 frames in 5.0 seconds = 3373.400 FPS
    12523 frames in 5.0 seconds = 2504.600 FPS
    13763 frames in 5.0 seconds = 2752.600 FPS
    PRECISE
    jw@NewWorld12:~$ fgl_glxgears
    Using GLX_SGIX_pbuffer
    12905 frames in 5.0 seconds = 2581.000 FPS
    3230 frames in 5.0 seconds = 646.000 FPS
    517 frames in 5.0 seconds = 103.400 FPS
    518 frames in 5.0 seconds = 103.600 FPS
    6489 frames in 5.0 seconds = 1297.800 FPS
    This is glxgears running in fullscreen. On Lucid (10.04) I can't see the gears, they are spinning so fast, but on Precise (12.04) they are really sluggish. Has anyone else noticed a problem like this? Cheers, Jack.

    Read the article

  • User Acceptance Testing Defect Classification when developing for an outside client

    - by DannyC
    I am involved in a large development project in which we (a very small start-up) are developing for an outside client (a very large company). We recently received their first output from UAT testing of a fairly small iteration, which listed 12 'defects', triaged into three categories: Low, Medium and High. The issue we have is whether everything in this list should be recorded as a 'defect': some of the issues they found would be better described as refinements, or even nice-to-haves, and some we think are not defects at all. The client's QA lead says it is standard for them to label every issue they identify as a defect; however, we are a bit uncomfortable about this. Whilst the relationship is good we don't see a huge problem, but we are concerned that, if the relationship suffers in the future, these lists of 'defects' could prove costly for us. We don't want to come across as being difficult, or as taking things too personally, and we are happy to make all of the changes identified; however, we are a bit concerned, especially as there is an uneven power balance at play in our relationship. Are we being paranoid? Or could we be setting ourselves up for problems down the line by agreeing to this classification?

    Read the article

  • An Oracle Event for Your Facility & Equipment Maintenance Staff

    - by Mark Rosenberg
    The 7th Annual Oracle Maintenance Summit will occur February 4 – 6, 2013 at the Hyatt Regency San Francisco. This year, the Maintenance Summit will be one of the major pillars of a larger Oracle Value Chain Summit. What makes this event different from the other events hosted by Oracle and the PeopleSoft community's various user groups is that it is specifically meant to provide a venue for the facility and equipment maintenance community to talk about all things related to maintenance. Maintenance planners, maintenance schedulers, vice presidents and directors of physical plant, operations managers, craft supervisors, IT management, and IT analysts typically attend this event and find it to be a very valuable experience. The Maintenance pillar will provide the same atmosphere and opportunity to hear from PeopleSoft Maintenance Management customers, Oracle Product Strategy, and partners as in past years. For more information, you can access the registration website for the Value Chain Summit. For existing PeopleSoft Maintenance Management customers: if you are interested in participating in the PeopleSoft Maintenance Management Focus Group, in which Oracle discusses product roadmap topics with the community of customers who have licensed the PeopleSoft Maintenance Management application, please contact [email protected], [email protected], or [email protected]. The Focus Group will meet on February 7th, and attendance is by invitation only. We look forward to seeing you in San Francisco! P.S. The Early Bird registration fee is $195. Register before December 31 to take advantage of this introductory low price, as the registration fee will go up to $295 after that date.

    Read the article

  • Have unit test generators helped you when working with legacy code?

    - by Duncan Bayne
    I am looking at a small (~70kLOC including generated) C# (.NET 4.0, some Silverlight) code-base that has very low test coverage. The code itself works in that it has passed user acceptance testing, but it is brittle and in some areas not very well factored. I would like to add solid unit test coverage around the legacy code using the usual suspects (NMock, NUnit, StatLight for the Silverlight bits). My normal approach is to start working through the project, unit testing & refactoring, until I am satisfied with the state of the code. I've done this many times in the past, and it's worked well. However, this time I'm thinking of using a test generator (in particular Pex) to create the test framework, then manually fleshing it out. My question is: have you used unit test generators in the past when commencing work on a legacy codebase, and if so, would you recommend them? My fear is that the generated tests will miss the semantic nuances of the code-base, leading to the dreaded situation of having tests for the sake of the coverage metric, rather than tests which clearly express the intended behaviour in code.
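
    Whichever route you take, hand-written characterization tests are worth mixing in alongside anything Pex generates. Here is a minimal sketch (NUnit, with a hypothetical legacy class standing in for real production code) of the kind of intent-revealing test that generators tend not to produce:

        using NUnit.Framework;

        // Stand-in for the legacy class; in real use you would reference the
        // production code instead of this stub.
        public class InvoiceCalculator
        {
            public decimal Total(decimal[] lines)
            {
                decimal sum = 0m;
                foreach (decimal line in lines)
                    sum += decimal.Round(line, 2);   // existing (possibly surprising) behaviour
                return sum;
            }
        }

        [TestFixture]
        public class InvoiceCalculatorCharacterization
        {
            // A characterization test: pin down what the legacy code does today,
            // with a name that records the intended behaviour. The expected value
            // is whatever the current implementation produces, verified once by hand.
            [Test]
            public void Total_RoundsEachLineBeforeSumming()
            {
                var calculator = new InvoiceCalculator();

                decimal total = calculator.Total(new[] { 19.995m, 0.012m });

                Assert.AreEqual(20.01m, total);
            }
        }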

    Read the article

  • JavaScript: Machine Constants Applicable?

    - by DavidB2013
    I write numerical routines for students of science and engineering (although they are freely available for use by anybody else as well) and am wondering how to properly use machine constants in a JavaScript program, or if they are even applicable. For example, say I am writing a program in C++ that numerically computes the roots of the following equation: exp(-0.7x) + sin(3x) - 1.2x + 0.3546 = 0. A root-finding routine should be able to compute roots to within the machine epsilon. In C++, this value is specified by the language: DBL_EPSILON. C++ also specifies the smallest and largest values that can be held by a float or double variable. However, how does this carry over to JavaScript? Since a JavaScript program runs in a web browser, and I don't know what kind of computer will run the program, and JavaScript has no corresponding predefined values for these quantities, how can I implement my own version of these constants so that my programs compute results to as much accuracy as the computer running the web browser allows? My first draft is to simply copy over the literal constants from C++:
    FLT_MIN: 1.17549435082229e-038
    FLT_MAX: 3.40282346638529e+038
    DBL_EPSILON: 2.2204460492503131e-16
    I am also willing to write small code blocks that could compute these values for each machine on which the program is run. That way, a supercomputer might compute results to a higher accuracy than an old, low-end PC. BUT, I don't know if such a routine would actually get at the underlying machine, in which case I would be wasting my time. Does anybody here know how to compute and use (in JavaScript) values that correspond to machine constants in a compiled language? Is it worth my time to write small routines in JavaScript that compute DBL_EPSILON, FLT_MIN, FLT_MAX, etc. for use in numerical routines? Or am I better off simply assigning literal constants that come straight from C++ on a standard Windows PC?
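
    One relevant fact: ECMAScript numbers are IEEE 754 double-precision values by specification, so the double constants are effectively fixed regardless of the hardware running the browser. The standard runtime computation of the epsilon is a short loop; a minimal sketch follows (written in C# to match a compiled language's double, but the identical loop ports line for line to JavaScript):

        using System;

        class MachineEpsilon
        {
            static void Main()
            {
                // Halve eps until adding it to 1.0 no longer changes the result;
                // the last value that did change it is the machine epsilon.
                double eps = 1.0;
                while (1.0 + eps / 2.0 > 1.0)
                {
                    eps /= 2.0;
                }
                Console.WriteLine(eps);   // ~2.220446049250313E-16 on IEEE 754 doubles
            }
        }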

    Read the article

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. This is to tell you that I have experience in programming, but not in professional development of business applications. I see a lot of talk on this site about performance. People often debate what would be the most efficient algorithm in C# to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations happen every second in a constant update loop, or low-level systems which other programs rely on, such as OSs and VMs. But for the normal, typical, high-level business app, why does performance matter? I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference to the end users of a typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing, do you actually feel the inefficiency? My question is split in two: (1) In practice, does performance matter today in a typical, normal business program? (2) If it does, please give me real-world examples of places in such an application where performance and optimization are important.
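
    For what it's worth, the difference is easy to measure rather than debate. Here is a hedged sketch (sizes and timings are illustrative and machine-dependent) of the same membership check done with a linear scan per element, then with a hash lookup:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        class ScalingDemo
        {
            static void Main()
            {
                int n = 50000;
                List<int> orders = Enumerable.Range(0, n).ToList();
                List<int> flagged = Enumerable.Range(0, n).ToList();

                // O(n^2): each Contains walks the list until it finds a match.
                Stopwatch sw = Stopwatch.StartNew();
                int slowHits = orders.Count(o => flagged.Contains(o));
                Console.WriteLine("List.Contains:    " + slowHits + " hits in "
                    + sw.ElapsedMilliseconds + " ms");

                // O(n): one hash probe per element.
                sw = Stopwatch.StartNew();
                HashSet<int> set = new HashSet<int>(flagged);
                int fastHits = orders.Count(o => set.Contains(o));
                Console.WriteLine("HashSet.Contains: " + fastHits + " hits in "
                    + sw.ElapsedMilliseconds + " ms");
            }
        }

    On a typical machine the quadratic version takes seconds and the linear one takes milliseconds, while at a few hundred elements nobody could tell them apart, which is roughly the shape of the whole debate.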

    Read the article

  • Seeking a C/C++ OBJ geometry read/write library that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation; i.e. read geometry, immediately write it, and a diff of the source OBJ against the one just written is identical. Every OBJ-writing utility I've been able to find online fails this test. I am writing small command-line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nate Robins's GLUT library includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay in cache on our weird little low-memory mobile devices.
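
    The round-trip requirement itself is simple to encode. A minimal sketch (in C# for brevity; the same idea carries directly to a C or C++ implementation) is to keep every source line verbatim, parse on the side, and rewrite only the lines an edit actually touches:

        using System.Collections.Generic;
        using System.IO;

        // Round-trip-safe OBJ container: the file is stored as its original
        // lines, so Read followed immediately by Write diffs clean (modulo a
        // possible trailing newline). Editing tools modify Lines in place;
        // untouched lines survive a diff byte for byte.
        class ObjFile
        {
            public readonly List<string> Lines = new List<string>();

            public static ObjFile Read(string path)
            {
                ObjFile obj = new ObjFile();
                obj.Lines.AddRange(File.ReadAllLines(path));
                return obj;
            }

            public void Write(string path)
            {
                File.WriteAllLines(path, Lines.ToArray());
            }
        }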

    Read the article

  • What are the best Microsoft Certifications to start with?

    - by emragins
    Background
    I have a bachelor's in math and a certification in C++ from 2007. Since then I've spent a lot of time working with Python and C#, and have started going through the ASP.NET certification materials. I'm starting to realize that the certification is going to take longer than anticipated, and I'm not sure I want to spend the next 4-5 months studying before I have it completed. Most of my resume shows teaching/tutoring experience with some low-level administration thrown in.
    Question
    If I want to get a programming position, which certifications would be best to start with? Which would be the quickest and easiest to obtain, yet still represent value to my employer? Are certifications even the way to go? If not, what would you suggest?
    Update
    I have several programs that I show off when I can (mostly games), and I'm about 75% through a C# application I hope to have done in the next week. Since most employers simply ask for a resume and not samples, what would be the best way to present this work to them?

    Read the article

  • How will my Electronic Engineering degree be received in the Canadian Game Development market? [closed]

    - by Harikawashi
    I have an Electronic Engineering with Computer Science degree from a reputable South African university. The EE with CS degree is basically Electronic Engineering with some of the high-voltage subjects thrown out and replaced with computer science subjects: mostly quite theoretical, but not in too much depth. I went on to earn a Master's degree in Digital Signal Processing, focussing on speech recognition in educational applications. I have always loved programming: I taught myself QBASIC when I was in primary school, learned Java at school, did some low-level C at university, and taught myself C# and Python while doing my postgraduate degree. C# is currently my strong suit; I think I am pretty capable with it. I have two years' work experience in Namibia, working as a consulting electrical engineer (no software content whatsoever) and also developing C# desktop applications for the company I work for. I would like to move to Canada next year and work in the game development industry as a programmer or software engineer. My interests in particular run towards the more mathematical applications, like game and physics engines, or statistical disciplines like artificial intelligence. However, these are passions, not areas in which I have any work experience. So the question: how well will my BEngEE&CS and MScEng be received in the game industry, seeing as it's not a pure software degree and I have no official software development work experience?

    Read the article

  • Microsoft IntelliMouse episodic pauses

    - by Rob Hills
    I have a Microsoft IntelliMouse connected via USB to a computer (directly, NOT via a hub) currently running Ubuntu 11.10, but this problem also existed before we upgraded from 10.10. Every now and then (apparently randomly) the computer "pauses" for anything up to a few seconds. This usually occurs after a mouse movement, and during the pause the computer is completely unresponsive to mouse or keyboard. lsusb shows:
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 002: ID 0409:0058 NEC Corp. HighSpeed Hub
    Bus 001 Device 004: ID 05e3:0605 Genesys Logic, Inc. USB 2.0 Hub [ednet]
    Bus 003 Device 013: ID 045e:001e Microsoft Corp. IntelliMouse Explorer
    Bus 001 Device 005: ID 04a9:1097 Canon, Inc. PIXMA iP5000
    Bus 001 Device 006: ID 0a5c:200a Broadcom Corp. Bluetooth dongle
    Bus 001 Device 007: ID 0911:1c57 Philips Speech Processing
    Bus 001 Device 008: ID 04a9:2219 Canon, Inc. CanoScan 9950F
    so the mouse appears to be correctly identified. Syslog episodically shows the following sequence:
    Jan 15 11:48:32 kayes-computer kernel: [10588.512036] usb 3-1: USB disconnect, device number 10
    Jan 15 11:48:33 kayes-computer kernel: [10589.248026] usb 3-1: new low speed USB device number 11 using uhci_hcd
    Jan 15 11:48:33 kayes-computer mtp-probe: checking bus 3, device 11: "/sys/devices/pci0000:00/0000:00:1d.1/usb3/3-1"
    Jan 15 11:48:33 kayes-computer kernel: [10589.448596] input: Microsoft Microsoft IntelliMouse® Explorer as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input11
    Jan 15 11:48:33 kayes-computer kernel: [10589.448706] generic-usb 0003:045E:001E.000B: input,hidraw0: USB HID v1.00 Mouse [Microsoft Microsoft IntelliMouse® Explorer] on usb-0000:00:1d.1-1/input0
    Jan 15 11:48:33 kayes-computer mtp-probe: bus: 3, device: 11 was not an MTP device
    though I can't confirm whether these are directly associated with the "pauses". Any thoughts on what might be causing this, or what else I can do to diagnose the problem?

    Read the article

  • Oracle JDK 7u10 released with new security features

    - by Henrik Stahl
    A few days ago, we released JRE and JDK 7 update 10. This release adds support for the following new platforms:
    - Windows 8 on x86-64. Note that Modern UI (aka Metro) mode is not supported.
    - Internet Explorer 10 on Windows 8.
    - Mac OS X 10.8 (Mountain Lion).
    This release also introduces new features that provide enhanced security for Java applet and Web Start applications, specifically:
    - The Java runtime tracks whether it has been updated to the latest security baseline. If you try to execute an unsigned applet with an outdated version of Java, a warning dialog will prompt you to update before running the applet.
    - The Java runtime includes a hardcoded best-before date. It is assumed that a new version will be released before this date. If the client has not been able to check for an update prior to this date, the Java runtime will assume that it is insecure and start warning the user prior to executing any applets.
    - The Java control panel now includes an option to set the desired security level on a low-medium-high-very high scale, as well as an option to disable Java applets and Web Start entirely. This level controls things such as whether the Java runtime is allowed to execute unsigned code, and if so, what type of warning will be displayed to the user.
    More details on the security settings can be found in the documentation. The new update of the JRE and the JDK is available via OTN. To learn more about the release, please visit the release notes.

    Read the article

  • Internet Explorer will not open Office files

    - by geekrutherford
    An issue was brought to my attention today at work where certain users were unable to open Office files (specifically Excel) from Internet Explorer 7. The user would click a button which simply generated an inline JS call to open a pop-up pointing to the .xlsx file on the server. IE would open the pop-up, and shortly thereafter the pop-up would disappear without the file ever opening. I tweaked the security settings in the user's browser: added the site to the list of trusted sites and lowered the security settings to Medium-Low. This allowed IE to at least prompt with the Save or Open message, but clicking either resulted in "Internet Explorer Could Not Open the Site...". Perturbed, I retreated to Geek Central (aka my desk) and modified my application so that, instead of simply pointing the browser at the file, it used Response.TransmitFile() to stream the file to the browser. I thought to myself "this is perfect, it has to work!!!". Alas, no luck. Bewildered and confused, I returned to the lone user's computer and started looking around the various IE options. I stumbled upon "Clear SSL State" under the "Content" tab, which clears the client's cached SSL state, forcing it to be refreshed. Doing this, in concert with resetting the security levels for all zones back to their defaults, seemed to do the trick.
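
    For anyone hitting the same wall, here is a minimal sketch of the streaming approach described above (ASP.NET Web Forms; the path, file name, and handler name are illustrative):

        using System;
        using System.Web.UI;

        public partial class Reports : Page
        {
            protected void ExportButton_Click(object sender, EventArgs e)
            {
                string path = Server.MapPath("~/Reports/report.xlsx");

                Response.Clear();
                Response.ContentType =
                    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
                Response.AddHeader("Content-Disposition",
                    "attachment; filename=report.xlsx");
                Response.TransmitFile(path);   // streams the file without buffering it in memory
                Response.End();
            }
        }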

    Read the article

  • The Science Behind Salty Airline Food

    - by Jason Fitzpatrick
    In this collection, artist Signe Emma combines a scientific overview of the role salt plays in airline food with electron microscope scans of salt crystals arranged to look like views from an airplane: a rather clever and visually stunning way to deliver the message. Attached to the collection is this explanation of why airlines load their snacks and meals with salt: White noise consists of a random collection of sounds at different frequencies, and scientists have demonstrated that it is capable of diminishing the taste of salt. At low-pressure conditions, higher taste and odour thresholds of flavourings are generally observed. At 30,000 feet the cabin humidity drops by 15%, and the lowered air pressure forces bodily fluids upwards. With less humidity, people have less moisture in their throat, which slows the transport of odours to the brain's smell and taste receptors. That means that if a meal is to taste the same in the air as on the ground, it needs 30% extra salt. To combat the double assault on our sense of taste, the airlines boost the salt content to compensate. For more neat microscope scans posing as high-altitude photographs, hit up the link below.

    Read the article

  • HP Notebook Pavilion g6-2101sl freeze

    - by StErMi
    I just bought this notebook and have already installed Ubuntu 12.04 LTS in a new partition, with 6 GB of swap. UPDATE 2: this is the laptop configuration: http://h10025.www1.hp.com/ewfrf/wc/document?cc=it&lc=it&dlc=it&tmp_geoLoc=true&docname=c03397517 Sometimes, without any special conditions, Ubuntu freezes. The mouse is blocked, the UI is blocked, Alt+F1 to kill something or to restart is blocked; I can't really do anything. It freezes with Ubuntu 3D, Ubuntu 2D, and GNOME Shell, and it freezes under both low and high load. I can only press the power button (physically) and restart my laptop, which is not the correct way to do things. I'm using this laptop for work, so I need a stable OS without these freezes. Does anyone know how to solve this problem? UPDATE:
    /var/log/messages is empty
    /var/log/kernel.log - http://paste.ubuntu.com/1220182/
    /var/log/Xorg.0.log - http://paste.ubuntu.com/1220186/
    I just installed the proprietary driver from ATI; it crashes anyway. This morning I started the laptop, enabled wireless, opened Dropbox and Chrome: freeze. When it freezes I cannot:
    - press Ctrl+Alt+F1 to get console access
    - press Alt+F2 and r to reload the session
    - use Alt+PrintScreen + REISUB to restart
    It is totally frozen.

    Read the article

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed 32-bit Kubuntu 11.10 (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x with a Core i7-2670QM CPU. The cores are supposed to be clocked at 2.2 GHz; however, the output of $ cat /proc/cpuinfo | grep -i "hz" gives me:
    model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
    cpu MHz : 800.000
    (the same pair repeats for each of the eight logical cores)
    If useful: the AC adapter is plugged in (yet the output is the same when the computer is powered only by the battery) and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect a possible automatic downclock made to save power when processor load is low, or is this output abnormal? EDIT: OK, I checked, and yes, the output does vary with load. I reach 2.2 GHz when needed. But my following problem remains. I was checking my CPU clocking because I experienced poor performance when playing 720p video files on Ubuntu with VLC or MPlayer when on battery (and I believe VLC by default only uses the CPU, not the GPU, to decode), whereas I haven't had such problems with VLC on Windows (which makes me think it isn't coming from a BIOS option; plus, every option in the BIOS regarding the CPU is turned ON).

    Read the article

  • Which language meets my needs? [closed]

    - by Gerald Goward
    I am a junior C# developer, working for half a year now. In my company I work on enterprise projects, and after doing that for quite some time I've understood that I don't like enterprise projects. I have my own browser game written in PHP+MySQL with some simple HTML+CSS, and it currently has 300 active players (those who entered the game at least once in the last 5 days) :) After thinking for quite some time I've understood that I am interested in: 1) web development, AND 2) standalone programs (but not enterprise ones); 3) development for mobile platforms is also nice, Android/iOS. The 1st and 2nd categories are what I want the most; Android/iOS is good too. I am NOT interested in big systems which are hard to integrate, and I am not interested in enterprise systems. In the future I would like to start my own business/projects. I would like to create my own projects or/and build a small programmers' company to create and release our own products. Please tell me which programming language(s)/technologies you would advise for this. Thanks a lot! UPD: It's NOT a "which language is better" or flame/holy-war-generating topic, since I am asking for the language that suits my EXACT needs best. I believe C++ is better for low-level coding, while PHP is good for web development and Objective-C was made for iOS. I am still a newbie at programming, so don't hate me, please.

    Read the article

  • Ubuntu 11.10 power management does not recognize removal of power supply!

    - by sema
    I have a Lenovo Ideapad Z370 with Ubuntu 11.10, and the battery status indicator shows wrong information.
    Problem: The indicator always shows that the power supply is connected, even when it is not. The battery charges and discharges normally; however, the status information is wrong. When charging, the "time to charge" decreases, and when discharging the "time to charge" increases. If the power supply is connected, the power statistics show: "Supply Yes" "Online Yes". If it is not connected, they show: "Supply Yes" "Online No".
    My trials: I tried reinstalling the indicator applet, but that doesn't help. Searching for solutions or similar problems didn't turn up any help.
    Background: The problem occurred after I switched the battery mode in Windows (I use a dual-boot system). Lenovo drivers allow a "battery runtime mode" for maximum runtime and a "battery health mode" for maximum battery lifetime. I initially used the runtime mode, tried the health mode for some time, but switched back to the runtime mode. The problem occurred after switching to health mode.
    Does anyone have an idea what is wrong? The problem matters to me because I get no warning when the battery is low, and the computer runs out of energy without shutdown or hibernation. This is really a problem for me!

    Read the article

  • Thick models vs. business logic: where do you draw the distinction?

    - by TokenMacGuy
    Today I got into a heated debate with another developer at my organization about where and how to add methods to database-mapped classes. We use SQLAlchemy, and a major part of the existing code base in our database models is little more than a bag of mapped properties with a class name: a nearly mechanical translation from database tables to Python objects. In the argument, my position was that the primary value of using an ORM is that you can attach low-level behaviors and algorithms to the mapped classes. Models are classes first, and persistent secondarily (they could be persisted as XML in a filesystem; you don't need to care). His view was that any behavior at all is "business logic", and necessarily belongs anywhere but in the persistent model, which is to be used for database persistence only. I certainly do think there is a distinction between business logic, which should be separated since it has some isolation from the lower level of how things get implemented, and domain logic, which I believe is the abstraction provided by the model classes argued about above; but I'm having a hard time putting my finger on what that is. I have a better sense of what might be the API (which, in our case, is HTTP "RESTful"), in that users invoke the API with what they want to do, distinct from what they are allowed to do and how it gets done. tl;dr: What kinds of things can or should go in a method on a mapped class when using an ORM, and what should be left out, to live in another layer of abstraction?
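
    The line is easier to point at than to define. A sketch in C# terms, since the distinction is language-agnostic (the entity and policy names are invented for illustration): behavior derivable from the object's own state can live on the mapped class, while behavior that composes objects with application policy lives above it:

        using System.Collections.Generic;

        // Domain logic on the mapped class: derivable from the entity's own
        // state, meaningful no matter how (or whether) the object is persisted.
        public class Order
        {
            public List<OrderLine> Lines = new List<OrderLine>();

            public decimal Subtotal()
            {
                decimal sum = 0m;
                foreach (OrderLine line in Lines)
                    sum += line.UnitPrice * line.Quantity;
                return sum;
            }
        }

        public class OrderLine
        {
            public decimal UnitPrice;
            public int Quantity;
        }

        // Business logic above the model: application policy composed over
        // domain objects; arguably this is what does not belong on the entity.
        public class PricingService
        {
            private readonly IDiscountPolicy _policy;   // hypothetical policy abstraction

            public PricingService(IDiscountPolicy policy) { _policy = policy; }

            public decimal PriceFor(Order order)
            {
                return order.Subtotal() - _policy.DiscountFor(order);
            }
        }

        public interface IDiscountPolicy
        {
            decimal DiscountFor(Order order);
        }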

    Read the article

  • Monitor not detected after booting without monitor attached (12.04)

    - by cawkie
    I had a stable 12.04 machine running perfectly. The machine was booted without the monitor connected; since then the system always boots into low-graphics mode. Onboard graphics (from lspci): VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03). Monitor: AOC e2450Swh. The display widget shows the monitor as a laptop (!?) and System Details shows the graphics as Gallium 0.4 on llvmpipe (LLVM 0x300). The X server log appears to show the correct monitor detected. When I boot from a live CD I get full 3D graphics. I've tried the monitor on a different machine: all OK. I've tried a different monitor on this machine: same problem. Between having a working system and a broken one there were no updates, and I made no configuration changes... EDIT: I have come to the conclusion that the problem is caused by a known issue with LightDM hanging on a battery check. I've managed to get 3D graphics working by switching to GDM; not a solution, but an acceptable workaround. I would still like to know what is causing the problem and how I managed to get my system into this state!

    Read the article

  • Access violation when trying to bind a Vertex Array Object

    - by Paul
    I've just started digging into OpenGL and I've run into a problem trying to set up a VAO. It's giving me a run-time error of:
    An unhandled exception of type 'System.AccessViolationException'
    at
    // Create and bind a VAO
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    I have searched the internet high and low for a solution and I haven't found one. The rest of my function looks like this:
    int main(array<System::String ^> ^args)
    {
        // Initialise GLFW
        if( !glfwInit() )
        {
            fprintf( stderr, "Failed to initialize GLFW\n" );
            return -1;
        }

        glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 0);   // no antialiasing
        glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);   // we want OpenGL 3.3
        glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
        glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);   // we don't want the old OpenGL

        // Open a window and create its OpenGL context
        if( !glfwOpenWindow( 800, 600, 0,0,0,0, 32,0, GLFW_WINDOW ) )
        {
            fprintf( stderr, "Failed to open GLFW window\n" );
            glfwTerminate();
            return -1;
        }

        // Initialize GLEW
        if (glewInit() != GLEW_OK)
        {
            fprintf(stderr, "Failed to initialize GLEW\n");
            return -1;
        }

        glfwSetWindowTitle( "Game Engine" );

        // Create and bind a VAO
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        glfwEnable( GLFW_STICKY_KEYS );

    Read the article
