Search Results

Search found 22841 results on 914 pages for 'aspect orientated program'.

Page 559/914 | < Previous Page | 555 556 557 558 559 560 561 562 563 564 565 566  | Next Page >

  • Variables in static library are never initialized. Why?

    - by Coyote
    I have a bunch of variables that should be initialized when my game launches, but most of them are never initialized. Here is an example of the code:

    MyClass.h:

        class MyClass : public BaseObject
        {
            DECLARE_CLASS_RTTI(MyClass, BaseObject);
            ...
        };

    MyClass.cpp:

        REGISTER_CLASS(MyClass)

    where REGISTER_CLASS is a macro defined as follows:

        #define REGISTER_CLASS(className)\
            class __registryItem##className : public __registryItemBase {\
                virtual className* Alloc(){ return NEW className(); }\
                virtual BaseObject::RTTI& GetRTTI(){ return className::RTTI; }\
            };\
            \
            const __registryItem##className __registeredItem##className(#className);

    and __registryItemBase looks like this:

        class __registryItemBase {
            __registryItemBase(const _string name):mName(name){ ClassRegistry::Register(this); }
            const _string mName;
            virtual BaseObject* Alloc() = 0;
            virtual BaseObject::RTTI& GetRTTI() = 0;
        };

    Now, this code is similar to what I currently have, and what I have works flawlessly: all the registered classes are registered to a ClassManager before main(...) is called. I'm able to instantiate and configure components from scripts, auto-register them to the right system, and so on. The problem arises when I create a static library (currently for the iPhone, but I fear it will happen with Android as well). In that case the code in the .cpp files is never registered. Why is the resulting code not executed when it is in the library, while the same code in the program's binary is always executed? Bonus questions: For this to work in the static library, what should I do? Is there something I am missing? Do I need to pass a flag when building the lib? Should I create another structure and initialize all the __registeredItem##className objects through that structure?
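
    Editorial note on the likely mechanism, to make the question easier to follow: a linker only pulls an object file out of a static library when already-linked code references a symbol defined in it, so a .cpp whose only content is a self-registering global tends to be dropped and its static constructor never runs. The sketch below shows the two common workarounds, assuming that is what is happening here; the linker flags are the standard Apple/GNU options, while the symbol and function names (gForceLinkMyClass, ForceLinkAllRegisteredClasses) are illustrative placeholders, not part of the poster's engine.

        // Workaround A: force every member of the archive into the link.
        //   Apple ld (Xcode/iPhone):  -all_load              (all archives)
        //                             -force_load libGame.a  (one archive)
        //   GNU ld:                   -Wl,--whole-archive -lGame -Wl,--no-whole-archive
        //
        // Workaround B: reference one symbol per registering translation unit from
        // code that is always linked (shown as a single file for brevity; in
        // practice the pieces live in the library and the application).

        // MyClass.cpp (in the static library), next to REGISTER_CLASS(MyClass):
        int gForceLinkMyClass = 0;

        // ForceLink.h (shipped with the library):
        extern int gForceLinkMyClass;
        inline void ForceLinkAllRegisteredClasses()
        {
            // Touching the symbol forces the linker to keep MyClass.o, which in
            // turn runs the static __registeredItem constructor before main().
            (void)gForceLinkMyClass;
            // ...one line per registered class.
        }

        // main.cpp (in the executable):
        int main()
        {
            ForceLinkAllRegisteredClasses();  // before querying the ClassManager
            // ... run the game ...
            return 0;
        }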

    Read the article

  • The premier support for Sun Cluster 3.1 ended

    - by JuergenS
    In October 2011 the Premier Support for Sun Cluster 3.1 ended. See details in the Oracle Lifetime Support Policy for Oracle and Sun System Software document. There is no 'Extended Support', and the 'Sustaining Support Ends' date is indefinite. But for indefinite 'Sustaining Support' I would like to point out, from the mentioned document (version Sept. 2011), page 5, that Sustaining Support does NOT include:

    * New program updates, fixes, security alerts, general maintenance releases, selected functionality releases and documentation updates or upgrade tools
    * Certification with most new third-party products/versions and most new Oracle products
    * 24 hour commitment and response guidelines for Severity 1 service requests
    * Previously released fixes or updates that Oracle no longer supports

    This means Solaris 10 9/10 (Update 9) is the last qualified release for Sun Cluster 3.1. So Sun Cluster 3.1 is not qualified on Solaris 10 8/11 (Update 10). Furthermore, there is a known issue with SVM patch 145899-06 or higher. This SVM patch is part of Solaris 10 8/11 (Update 10). Because 145899-06 is the first released revision of this patch, support for Sun Cluster 3.1 ends with the previous SVM patches 144622-01 and 139967-02. For details about the known problem with SVM patch 145899-06 please refer to doc 1378828.1. This also means you should freeze (no patching, no upgrade) your Sun Cluster 3.1 configuration no later than Solaris 10 9/10 (Update 9). Or, even better, plan an upgrade to Solaris Cluster 3.3 now to get back to full support.

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. Then the data are assigned a quality flag (by a human and/or a program), processed (calibration curves applied) and flagged again. So we basically end up with two datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are not likely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect he is trying to make his life easier when it comes to backing up and upgrading the MySQL server. There are not many links between the data, and the data don't change much, but the quality flags will change. An RDBMS makes it easier to compare data from different instruments on a "many days" scale, compared to daily text files. Well, what would you advise? Thanks.

    Read the article

  • Windows XP problems

    - by anca
    I had problems with the "Cake 3.00" web virus and I downloaded Malwarebytes Anti-Malware. I got rid of that virus, but now I have other problems. When I start my PC a Skype error appears: "Exception EOleSysError in module Skype.exe at 0013CE7D. The RPC server is unavailable", and another one: "Error in C:\WINDOWS\system32\NvCpl.dll. Missing entry: NvStartup". When I install/uninstall a program it says: "The Windows Installer Service could not be accessed. This can occur if you are running Windows in Safe Mode, or if the Windows Installer is not correctly installed. Contact your support personnel for assistance." I have tried many "solutions" but nothing worked. Please HELP! It is Windows XP SP2.

    Read the article

  • I want to be a programmer, work in a corporate environment, earn well, learn fast and eventually become a great programmer [on hold]

    - by Shin San
    I'll try to keep this simple: I'm 29, have been dabbling with computers for the past 10 years, had entry-level jobs in tech support for different apps, have been fixing computers for a while and now want to specialize in something. I'm not a complete stranger to programming but haven't gone past if/then/else with anything: a bit of JavaScript, PHP, Python, and currently checking out the "SELECT" statement in SQL :)) I'm curious about programming, I enjoy it, and I'm thinking of making a living out of it. So, while I'm at it, why not earn a bit more than the average Joe? That's why I'm checking what the best solution, the best learning path and the most useful languages are, considering: a) how easy/fast you can find a job by knowing it, b) how much I would be able to earn, and c) how fast I can learn it. By reading 10-20 articles online I've come up with an example, but I'm here for some expert advice. Example (ratings from the a) and b) points of view): #1 SQL; #2 Java; #3 HTML (please don't start the markup language debate); #4 JavaScript. From these ratings, I'd say a good way to go is to learn HTML/CSS/(JavaScript or PHP) for the web part of apps, some SQL/MySQL/whateverSQL for holding data, and loads of Java for the program itself. Please let me know if this is a good idea and, if so, what the order for learning all of the above should be. Otherwise, please let me know a better way and why it would be better. Many thanks for taking the time to read my question. Best wishes to you guys. Edit: if I think Java + SQL + HTML&JavaScript is the way to go, does the order I'm learning them in matter? Or can I try to learn them all at once?

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first-year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs. PhD articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in the hope of 1) getting some extra insight on the issue and 2) making sure that I am correct in my assumptions. Hopefully people who have experience in the respective fields can tell me if I am wrong, so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary-calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life, so 300k is something I can't even really accurately imagine. I know that I wouldn't have it all at once, obviously, but just knowing I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable, so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (Pretty straightforward - not much to elaborate on, but this is a big deal.) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc. than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages were littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me.

    A couple of cons on each side. PhD: I don't really enjoy writing, and I don't feel like I'm that great at technical writing. Whenever I'm in a group to make something, I'm always the one who does the large majority of the work and then gives it to my team members to write up a report. Presenting is different, though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being part of some small elite group that builds things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters: I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Let's say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction from saying "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I have always been that way - I was a great pitcher during my baseball years, but not so good at everything else; great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be someone that people look towards and come to ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, though, being specialized can be a very bad thing because of the speed of new technology. When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD, though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at a rapid pace. Some even work like hired guns from project to project, which is NOT what I want AT ALL. But finding a place to make great and important software would be great, if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way, even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you, and it's whoever can finish and publish first (but everyone seems to be careful to check for that circumstance).

    The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that lasts for decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, it's to create something that will." Above anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work, and even fewer will understand its importance. So if anything I have said is false, then please inform me. If you think I come off as more of a masters person or a PhD person, let me know. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • Are there any concrete examples of where a parallelizing compiler would provide a value-adding benefit?

    - by jamie
    Paul Graham argues that: It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we made it easier!). The holy grail is to be able to accelerate any program, be it MySQL, GarageBand, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart enough" compiler would have provided a real benefit, i.e. one that someone would pay for? A good answer is one where there is a process where the computer runs at 100% CPU on a single core for a painful period of time. That time might be 10 seconds, if the task is meant to be quick. It might be 500 ms if the task is meant to be interactive. It might be 10 hours. Please describe such a problem. Really, that's all I'm looking for: candidate areas for further investigation. (Hence, raytracing is off the list because all the low-hanging fruit have been feasted upon.) I am not interested in why it cannot be done. There are a million people willing to point to the sound reasons why it cannot be done. Such answers are not useful.
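
    To make the "100% CPU on a single core" criterion concrete, here is an editorial sketch (not from the post) of the kind of loop such a compiler would have to find and transform on its own. With OpenMP the programmer still marks the loop by hand, but the pragma performs exactly the transformation a sufficiently smart compiler would need to prove safe: no iteration depends on any other.

        // Illustrative only: a pixel-independent image filter that pegs one core
        // at 100% without the pragma, and uses every core with it.
        #include <cstddef>
        #include <vector>

        void brighten(std::vector<float>& pixels, float gain)
        {
            #pragma omp parallel for   // remove this line: single core, 100% CPU
            for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(pixels.size()); ++i)
            {
                pixels[i] *= gain;     // each iteration touches only its own element
            }
        }
        // Build with e.g. `g++ -O2 -fopenmp` to get the multi-threaded version.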

    Read the article

  • Installing Collective Access

    - by Michele
    I am VERY new to installing any type of server program and to running any open-source software in general. I am running Windows Server 2008 R2. I want to install CollectiveAccess to run locally, only on my intranet at home, so my host is localhost. I successfully installed PHP and MySQL, and I installed CA in the directory C:/inetpub/wwwroot/collectiveaccess. First: I do not want to send mail through CollectiveAccess. Will it install without all the email information? Can I comment those requirements out in the global config and in the setup.php file? Second: I am getting the error "Configuration file is missing for hostname 'localhost'". This is what I have in the setup file:

        define("CA_WEB_ROOT_DIR", "c:inetpub/wwwroot");
        define("CA_URL_ROOT", "/collectiveaccess");
        define("CA_SITE_HOSTNAME", "localhost");
        define("CA_DB_HOST", 'localhost');

    Read the article

  • How to get Unity working on a dual-GPU laptop?

    - by Mourgos
    I recently bought a new laptop (Asus X53S series) that has two GPUs, an NVIDIA GeForce GT 540M and an integrated Intel GPU (I believe it's called Intel HD Graphics 3000). I installed the recommended restricted NVIDIA drivers after a clean Ubuntu 11.10 install. In the 'Additional Drivers' program I get the message "This driver is enabled and in use", although when I try to open the NVIDIA X Server Settings it says "You do not appear to be using the NVIDIA X driver", which seems to be the case since Ubuntu only starts using Unity 2D. I had the same issue in 11.04 and was forced to use the nouveau driver just to get Unity working, but since I get quite a few crashes with it I really want to get the proprietary driver working this time around. Since I've never had this issue with older laptops, I can only assume it is caused by the dual-GPU configuration. How do I get Ubuntu to use the proprietary drivers, or is there any workaround to get the integrated Intel GPU ignored by Ubuntu? Alternatively, has anyone got Unity working with a similar setup?

    Read the article

  • Most effective work habit for coding? [on hold]

    - by Cris
    Working on a big solo project (~15,000 LOC), I am encountering the following phenomenon: I seem to work best when I program in short bursts of 10-15 minutes. Right now I am working on a section that is a complete first for me architecturally, and if any architectural issues emerge while doing the implementation, I seem to serve them best by taking a total break, then, later, sketching out the ideas on some paper, and, when I feel I have sufficient clarity, going back to the code. This iterates until the architectural issue for that section is resolved. This seems quite counterintuitive: that I can progress more quickly by coding less and taking more breaks. I am nearing the end of the sections which are "firsts" for me, and am about to dive into stuff I am much more familiar with, and I am wondering if this counterintuitive efficiency will continue. So my question is: even for regular coding of sections one is familiar with, which don't require constant re-clarification of the best architecture, is more progress to be attained by taking more breaks and coding in bursts?

    Read the article

  • Dye Sub printer with specific prints remaining - can I command-line query this?

    - by Jason N
    Hey, I've got a Sony dye-sub printer that takes ink/paper sets, i.e. a fixed amount of ink and paper good for ~200 prints. This information is available to me from within Control Panel > Printers > Preferences > Printer Device Information (e.g. currently 189 remaining prints). Is there any way I can get this information from the command line? I'd like to write a little program to tell me when the number of prints gets low (i.e. < 20), rather than suffer the annoying Windows "out of paper" popup. I've found the Windows VBScript print utilities, but can't seem to find the request I need for this. Any suggestions? Jason

    Read the article

  • Mailer Daemon greeting failed

    - by Xelluloid
    I wrote a tool that sends automated mails to a couple of addresses. This worked for a couple of weeks. Now since yesterday I get Mailer-Daemon responses like this:

        Hi. This is the qmail-send program at test.test2.net.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.

        testuser@domain.com:
        Connected to 123.456.789.10 but greeting failed.
        Remote host said: 554 foo.bar.com
        I'm not going to try again; this message has been in the queue too long.

    Does someone have an idea what I can do now?

    Read the article

  • How come I can't ping my home computer?

    - by bikefixxer
    I'm trying to set up a VPN into my home computer in order to access files from wherever. I have the home computer set up with a No-IP dynamic DNS client so I can always connect, and I have also tried using the actual IP address. However, when I try to connect or even ping from anywhere outside of my house I can't get through. I've tried putting that particular computer in the DMZ and turning off the computer's firewall and anti-virus, and I still don't get anything. I have Comcast as my home internet provider. I have also tried from two different locations. Are there any other solutions I can try, or is Comcast the issue? I used to be able to do this when I ran a small web server at home for fun, but now nothing works. Thanks in advance for any suggestions!

    Read the article

  • Real-time stock market application

    - by Sam
    I'm an amateur programmer. I'd like to develop a software application (like TradeStation) to analyse real-time market data. Please tell me whether the following approach is correct, i.e. the procedures, knowledge or software needed, etc.: Use a DB to read the real-time feed from the data provider: what would be the right DB to use? I know it should be a time-series one. Can I use SQL, MySQL, or others? What database can receive a real-time data feed? Do I need to configure the DB to do this? If the real-time data is in ASCII form, how can it be converted into something that can be read by the DB and my application? Do I have to write code, or can I just use some add-ins? What kind of add-ins are needed? How should I code the program to retrieve the changing data from the DB so that the analysis software's screen data can also change asynchronously (like RTD in Excel)? Which aspects of programming do I need to learn to develop the above? Are there web resources/books I can refer to for more information?
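
    As an editorial illustration of the last point (not part of the original question), the sketch below shows the asynchronous hand-off the poster is describing: one thread plays the data provider and pushes ticks, another wakes up and refreshes whenever new data arrives, much like RTD updating a spreadsheet cell. A real system would also persist each tick to whatever time-series store is chosen; the in-memory queue and all names here are made up for the example.

        #include <chrono>
        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>

        struct Tick { std::string symbol; double price; };

        std::queue<Tick> pending;           // hand-off between feed and analysis
        std::mutex m;
        std::condition_variable data_ready;

        void feed()                         // plays the role of the data provider
        {
            for (int i = 0; i < 5; ++i) {
                { std::lock_guard<std::mutex> lk(m);
                  pending.push({"ABC", 100.0 + i}); }
                data_ready.notify_one();
                std::this_thread::sleep_for(std::chrono::milliseconds(200));
            }
        }

        void analysis()                     // plays the role of the charting screen
        {
            for (int received = 0; received < 5; ++received) {
                std::unique_lock<std::mutex> lk(m);
                data_ready.wait(lk, [] { return !pending.empty(); });
                Tick t = pending.front(); pending.pop();
                lk.unlock();
                std::printf("%s %.2f\n", t.symbol.c_str(), t.price);  // "redraw"
            }
        }

        int main()
        {
            std::thread producer(feed), consumer(analysis);
            producer.join(); consumer.join();
        }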

    Read the article

  • Why is glVertexAttribDivisor crashing?

    - by 2am
    I am trying to render some trees with instancing. This is rather weird: before going to sleep last night I checked the code and it was in a running state, but when I got up this morning it crashes when I call glVertexAttribDivisor. I haven't changed any code since yesterday. Here is how I am sending data to the GPU for instancing:

        glGenBuffers(1, &iVBO);
        glBindBuffer(GL_ARRAY_BUFFER, iVBO);
        glBufferData(GL_ARRAY_BUFFER, (ml_instance->i_positions.size()*sizeof(glm::vec4)), NULL, GL_STATIC_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, (ml_instance->i_positions.size()*sizeof(glm::vec4)), &ml_instance->i_positions[0]);

    And then in the vertex specification:

        glBindBuffer(GL_ARRAY_BUFFER, iVBO);
        glVertexAttribPointer(i_positions, 4, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(i_positions);
        glVertexAttribDivisor(i_positions, 1); // THIS IS WHERE THE PROGRAM CRASHES
        glDrawElementsInstanced(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0, TREES_INSTANCE_COUNT);

    I have checked ml_instance->i_positions; it has all the data that needs to be rendered. I have checked the value of i_positions in the vertex shader; it is the same as whatever I have defined there. I am a little out of ideas here, everything looks pretty much fine. What am I missing?
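
    An editorial note, since the post leaves the cause open: a crash inside glVertexAttribDivisor is most often either an attribute location that isn't what the code assumes, or a function pointer that was never loaded because the context doesn't expose GL 3.3 (older drivers may only offer glVertexAttribDivisorARB via ARB_instanced_arrays). The sketch below is not the poster's code; it assumes a GLEW-style loader and a linked program object, and just shows the defensive checks that distinguish the two cases.

        #include <GL/glew.h>
        #include <cstdio>

        // program and instance_vbo are placeholders for the poster's own handles.
        bool setup_instanced_positions(GLuint program, GLuint instance_vbo)
        {
            // Case (a): ask the linked program for the location instead of assuming it.
            GLint loc = glGetAttribLocation(program, "i_positions");
            if (loc < 0) {
                std::fprintf(stderr, "attribute i_positions not found or not active\n");
                return false;
            }
            GLuint i_positions = static_cast<GLuint>(loc);

            // Case (b): make sure the instancing entry point was actually loaded.
            if (glVertexAttribDivisor == nullptr) {
                std::fprintf(stderr, "glVertexAttribDivisor not available in this context\n");
                return false;
            }

            glBindBuffer(GL_ARRAY_BUFFER, instance_vbo);
            glEnableVertexAttribArray(i_positions);
            glVertexAttribPointer(i_positions, 4, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexAttribDivisor(i_positions, 1);  // advance this attribute once per instance
            return true;
        }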

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, it seems like there's somewhat of a consensus that generalist programming languages (ones that try to be good at everything: supporting multiple paradigms, supporting both very high- and very low-level programming, etc.) are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed:

    1. Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint.

    2. Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing that, when some new requirement surfaces, it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain-specific and/or have very poor concurrency/parallelism support.

    3. When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these goes drastically down if you're forced to use a different language than the one you've accumulated all this code in.

    What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying, or alive and well?

    Read the article

  • Security Audit Failures in Event Viewer Windows Server 2008R2

    - by Jacob
    When I am looking at the Security log in Event Viewer on a Windows Server 2008 R2 machine, I am seeing a ton of Audit Failures with Event ID 4776:

        The computer attempted to validate the credentials for an account.
        Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        Logon Account: randy
        Source Workstation: HPDB1
        Error Code: 0xc0000064

    I verified that the account "randy" exists in my Active Directory. From my understanding, there have not been any recent password changes. Is there any way to get detailed information on this error? I am wondering what program is requesting this information. Also, is there any way to clear this error up? I was thinking about resetting the password and changing it back to the original.

    Read the article

  • Video game "Gish" will only launch from command line

    - by aberration
    Platform: Lubuntu 11.10 x64. Program: Gish. When I try to launch Gish from the command line (/opt/gish/gi.sh), there are no problems. But when I try to launch it from the LXDE menu, it will not start. Contents of /usr/share/applications/gish.desktop:

        [Desktop Entry]
        Categories=Game;ActionGame;AdventureGame;ArcadeGame;
        Exec=/opt/gish/gi.sh
        Path=/opt/gish
        Icon=x-gish
        Terminal=false
        Type=Application
        Name=Gish

    I tried changing Terminal=false to Terminal=true to debug it, but then I just got a blank terminal, and the game didn't start. Edit: Here is some additional information, as requested by Eliah Kagan below. I tried editing /usr/share/applications/gish.desktop, as recommended, but it had no effect. However, ~/.xsession-errors contained the following error:

        [: 8: x86_64: unexpected operator
        ./gish_32: error while loading shared libraries: libGL.so.1: wrong ELF class: ELFCLASS64

    I think there's a problem with the /opt/gish/gi.sh shell script. This is its contents:

        cd /opt/gish/
        MACHINE_TYPE=`uname -m`
        if [ ${MACHINE_TYPE} == 'x86_64' ]; then
          ./gish_64
        else
          ./gish_32
        fi

    I'm not too familiar with Bash, so hopefully someone else can point out the error. I have a 64-bit machine. I think that when the script is run from the command line, it properly launches the 64-bit version (/opt/gish/gish_64), but when it's run from the LXDE menu, it launches the 32-bit version (/opt/gish/gish_32), which is causing the libGL.so.1 error. However, this may be related to my libGL.so.1 problems with 2 other games.

    Read the article

  • ERROR: 6553609 You are not authorized to perform the current operation

    - by Tim
    I'm trying to back up a subsite in my portal using smigrate. When logged into the server and running the following command:

        D:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\BIN\smigrate -w -f C:\CH_Test_04_backup.fwp -u domain\user -pw **

    I get this output:

        20 Nov 2009 13:57:06 -0000 Site collection opened
        20 Nov 2009 13:57:07 -0000 Authenticating against the web server.
        20 Nov 2009 13:57:07 -0000 Already authenticated against the web server.
        20 Nov 2009 13:57:07 -0000 ERROR (possibly fatal): ERROR: 6553609 You are not authorized to perform the current operation.
        20 Nov 2009 13:57:07 -0000 Site collection closed

    So I get an error (ERROR: 6553609 You are not authorized to perform the current operation) and the site does not get backed up. I've tried the solutions outlined in various KB articles but had no luck with them. Can anyone help? Regards, Tim

    Read the article

  • Cannot install Ubuntu on an Acer Aspire One 756

    - by Byron807
    I have used Ubuntu before, in virtual machines, but today I decided to make the leap and I bought a netbook to install Ubuntu as a "real" OS alongside Windows. The netbook I bought is an Acer Aspire One 756, with a 64-bit Intel processor, 4GB RAM, and Windows 8 as the default OS. I have now encountered several obstacles that actually prevent me from installing Ubuntu 12.10. Here are all the things I have tried so far: Used a live CD, in combination with a USB DVD drive. (I should point out that the Aspire One does not have an optical drive.) The computer does not boot in Ubuntu; the drive keeps spinning, but nothing happens, even though I changed the boot order in the BIOS. Used a USB drive created via the tool available on pendrivelinux.com. Again, I've made changes to the BIOS to make sure the computer tries to boot from USB before using the built-in HDD. The results vary in this case: sometimes, the computer keeps rebooting like crazy until I remove the USB drive, at which point the computer boots into Windows 8, as expected. If I use a different USB drive, I get an error message that says that the USB drive has been blocked due to "the current security policy". Tried to install Ubuntu via Wubi. The program appears to install something, but at some point during the installation process, I get a non-specified error message and nothing else happens. I am not sure if these are known issues; in any case, searching the forum has not yielded any results, so I thought I should simply describe my problem here in the hope that this question has not been answered before. I would greatly appreciate any help with this annoying problem. Of course, if anything is unclear, do not hesitate to ask for further details.

    Read the article

  • Mavericks Dock Lag

    - by Syle
    I'm having a weird bug on my MacBook Pro. When I start the machine, the Dock has some graphics lag, and the same happens when I open and close windows. The solution is simple: just open AutoCAD and close it, and the bug is gone. The same goes if I open Autodesk Maya or another similar 3D program. My question here is not how to fix it, but why it happens. It's like Mavericks at startup thinks: "Well, you can only use a little of the graphics card's memory!" But if I open AutoCAD, it's like: "Oh, you are opening AutoCAD, so I'd better start boosting the graphics card and giving you more memory." I bet there is something that is forcing Mavericks to think like this. Any help is welcome! =)

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1. Clearly, immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time, electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries:

    * I/O
    * Exceptions/errors
    * Interfaces with programs written in other languages
    * Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2. Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary. I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
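
    As an editorial illustration of where that boundary tends to sit (this is not from the original question), the sketch below lets several threads read a shared, immutable snapshot with no locks at all, while the one remaining side effect - publishing a replacement snapshot - still needs a synchronized step, here C++11's atomic shared_ptr helpers. All names are made up for the example.

        #include <cstdio>
        #include <memory>
        #include <string>
        #include <thread>
        #include <vector>

        struct Config {                   // immutable after construction
            const std::string name;
            const int threshold;
        };

        std::shared_ptr<const Config> current(new Config{"v1", 10});

        void reader(int id)
        {
            for (int i = 0; i < 3; ++i) {
                // Lock-free read of whatever snapshot is current right now.
                std::shared_ptr<const Config> snap = std::atomic_load(&current);
                std::printf("reader %d sees %s/%d\n", id, snap->name.c_str(), snap->threshold);
            }
        }

        void writer()
        {
            // The new state is built off to the side, then published in one atomic step.
            std::shared_ptr<const Config> next(new Config{"v2", 20});
            std::atomic_store(&current, next);
        }

        int main()
        {
            std::vector<std::thread> ts;
            for (int id = 0; id < 3; ++id) ts.emplace_back(reader, id);
            ts.emplace_back(writer);
            for (auto& t : ts) t.join();
        }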

    Read the article

  • Dell Media Direct is rebooting my machine when it goes into sleep mode

    - by wsanville
    I've got a Dell Studio 1535 laptop, which shipped with Vista 32-bit. I've since formatted and installed Windows 7 64-bit. Everything has been fine for months, but recently, every time I leave my machine unattended and it goes to sleep, it wakes with the Dell Media Direct splash screen and then goes to the "Windows was not shut down properly..." dialog that asks if you want to boot into safe mode/start Windows normally/etc. The stupid button is also currently stuck on, but even when it is off, the problem still occurs. From the searching I've done, I've learned that the program is installed on its own partition, but I'm fairly certain I formatted everything (see a screenshot of my partitions: http://www.engr.uconn.edu/~wsj05001/misc/partitions.png). How can I stop the madness?

    Read the article

  • C# 4.0 Optional/Named Parameters Beginner's Tutorial

    - by mbcrump
    One of the interesting features of C# 4.0 is support for both named and optional arguments. They are often very useful together, but are actually two quite different things. Optional arguments give us the ability to omit arguments to method invocations. Named arguments allow us to specify the arguments by name instead of by position. Code using named parameters is often more readable than code relying on argument position. These features were long overdue, especially in regard to COM interop. Below, I have included some examples to help you understand them more in depth. Please remember to target the .NET 4 Framework when trying these samples.

        using System;

        namespace ConsoleApplication3
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // C# 4.0 Optional/Named Parameters Tutorial

                    Foo();                              // Prints to the console | Return Nothing 0
                    Foo("Print Something");             // Prints to the console | Print Something 0
                    Foo("Print Something", 1);          // Prints to the console | Print Something 1
                    Foo(x: "Print Something", i: 5);    // Prints to the console | Print Something 5
                    Foo(i: 5, x: "Print Something");    // Prints to the console | Print Something 5
                    Foo("Print Something", i: 5);       // Prints to the console | Print Something 5
                    Foo2(i3: 77);                       // Prints to the console | 77
                //  Foo(x: "Print Something", 5);       // Positional parameters must come before named arguments. This will error out.
                    Console.Read();
                }

                static void Foo(string x = "Return Nothing", int i = 0)
                {
                    Console.WriteLine(x + " " + i + Environment.NewLine);
                }

                static void Foo2(int i = 1, int i2 = 2, int i3 = 3, int i4 = 4)
                {
                    Console.WriteLine(i3);
                }
            }
        }

    Read the article

  • Move a 2D square on the y axis in Android GLES2

    - by Dan
    I am trying to create a simple game for Android. To start, I am trying to make the square move down the y axis, but the way I am doing it doesn't move the square at all, and I can't find any tutorials for GLES20. The onDrawFrame function in the renderer class updates the user's position based on acceleration due to gravity, gets the transform matrix from the user class (which is used to move the square down), and then the program draws it. All that happens is that the square is drawn; no motion happens.

        public void onDrawFrame(GL10 gl) {
            user.update(0.0, phy.AccelerationDewToGravity);
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT); // Re-draws the black background
            GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false, 12, user.SquareVB); //triangleVB);
            GLES20.glEnableVertexAttribArray(maPositionHandle);
            GLES20.glUniformMatrix4fv(maPositionHandle, 1, false, user.getTransformMatrix(), 0);
            GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
        }

    The update function in the player class is:

        public void update(double vh, double vv) {
            Vh += vh; // Increase horizontal velocity
            Vv += vv; // Increase vertical velocity
            //Matrix.translateM(mMMatrix, 0, (int)Vh, (int)Vv, 0);
            Matrix.translateM(mMMatrix, 0, mMMatrix, 0, (float)Vh, (float)Vv, 0);
        }

    Read the article
