Search Results

Search found 20668 results on 827 pages for 'last modified'.


  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, I think this is needed for some requests, because otherwise tokens can be stolen, and before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however, this is difficult to implement as I'm using the ASP.NET Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated on: the URL, possibly minus the query string; the verb; the request IP (potentially a barrier on some mobile devices, though); and the UTC date and time when the client issues the request. For the last one I would have the client send that string in a request header, of course - and I can use it to decide whether the request is 'fresh' enough. My thinking is that whilst this doesn't prevent message body tampering, it does prevent a malicious third party capturing one request and using it as a template for different requests later on. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for HMAC appropriately. Does this sound like enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
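    For illustration, a minimal sketch of computing such a MAC on the caller side, assuming the canonical string is simply the verb, URL, client IP and UTC timestamp joined with newlines - the field order, separator and header name here are assumptions for the sketch, not part of the question:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class RequestSigner
        {
            // Builds the canonical string and returns a Base64-encoded HMAC-SHA-256 over it.
            public static string Sign(string secretKey, string verb, string url,
                                      string clientIp, DateTime utcNow)
            {
                string canonical = string.Join("\n",
                    verb.ToUpperInvariant(),
                    url,                    // possibly with the query string stripped
                    clientIp,
                    utcNow.ToString("o"));  // ISO 8601; the client sends this same string in a header

                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                    return Convert.ToBase64String(mac);
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                string mac = RequestSigner.Sign("caller-secret", "GET",
                    "https://api.example.com/users/42", "203.0.113.7", DateTime.UtcNow);
                Console.WriteLine(mac); // sent as, e.g., an X-Request-MAC header (name is illustrative)
            }
        }

    The server recomputes the same canonical string from the incoming request, compares MACs, and rejects anything whose timestamp falls outside the freshness window.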

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar

    - Aspect ratio is very important to home video. What is aspect ratio? The ratio of an image's width to its height: at 2.35:1, the image is 2.35 times as wide as it is high. Pixar uses this for half of its movies. This is called a widescreen image.
    - When a widescreen image is modified to fit your television screen, they cut it to fit the box of your screen. When a comparison is made, huge chunks of the picture are missing, and it is harder to follow what is going on when those pieces are missing. The whole is greater than the pieces themselves: if you are missing pieces, you are missing the movie. The soul and the mood are in the film's shots; cutting it to fit a screen, you are losing 30% of the movie.
    - Why different aspect ratios? Film before the 1950s was 1.33:1, the Academy Standard, though early on there were all sorts of aspect ratios - there was no standard. Thomas Edison developed projecting images onto a wall/screen but didn't patent it, as he saw no value in it. Then 1.37:1 came about to add a strip of sound; this is the same size as 35mm film.
    - Around 1952, TV comes along, and NTSC television followed the Academy Standard (4x3). Once TV came out, movie theater attendance plummeted, so film brought forth color to combat this, along with early 3D and widescreen (CinemaScope). Studios at the time made movies bigger and bigger; there was a Napoleon movie that was actually 4x1 - really wide.
    - 1.85:1 Academy Flat and 2.35:1 Anamorphic Scope (aka Panavision/CinemaScope): almost all movies are made in these two aspect ratios, and Pixar has done half in one and half in the other. Why choose one over the other? Artist choice - it is part of the story the director wants to tell.
    - Can we preserve the story outside of the theaters? TVs before 1998 were very square; now TVs are very wide. Historical options:
      - Toy Story was released as it was, and people cut it in a way that the studio didn't like.
      - Pan and scan: cut, then scan left or right depending on where the action is.
      - Frame height: Pixar can go back and animate more picture to account for the bottom/top bars. You end up with more sky and more ground, the characters seem to get lost in the picture, and you lose what the director originally intended.
      - Re-staging: for animated movies, you can move characters around and restage the scene. It is a completely new, different version of the film. This is the best option Pixar came up with, though they have mostly stopped doing it as the demand has pretty much dropped off.
    - Why not 1.33 today? There has been an evolution of taste and demands. VHS was a linear medium; the focus was portability, not quality. Most releases were pan and scan and the quality was so bad - but people didn't notice. DVD was introduced in 1996: you could have more content - two versions of the film, the widescreen version and the 1.33 version - and people realized they were seeing more of the movie with the widescreen. High-definition televisions (16x9 monitors) were introduced in 2005, and Blu-ray Disc in 2006; this is all widescreen. You cannot find a square TV anymore - TVs are roughly a 1.85:1 aspect ratio. There is a change in demand: users are used to black bars and used to widescreen; users are educated now.
    - What's next for in-flight entertainment? High-def IFE, personal electronic devices, 3D in-flight.

  • Can't find synergy config file on Windows

    - by Joel Avery
    I'm new to Synergy, but I connected everything just fine. Both my Windows 7 64-bit (server) and 7 32-bit (client) machines are connected perfectly. However, I can't tell Synergy which screen is where, because I can't find this config file everyone is talking about. I looked in the root folder of the application; there is no ext/synergy.conf, so I went and made one, but that isn't working either. Kind of frustrating because I think it's the last step. Anyway, any help is much appreciated. I have the newest version from the site, but it says version unknown in the application. On Windows it has that cool drag-and-drop UI to place your screens where you want them, but nothing is working for my mouse or keyboard.
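    For reference, a minimal sketch of what a hand-written synergy.conf looks like for a two-machine setup; the screen names below are placeholders and must match the names each machine reports:

        section: screens
            win7-server:
            win7-client:
        end

        section: links
            win7-server:
                right = win7-client
            win7-client:
                left = win7-server
        end

    This declares both screens and says the client sits to the right of the server (and vice versa), which is the "which screen is where" part the drag-and-drop UI normally handles.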

  • Cannot login to XP SP3 in VMWarePlayer virtual machine in safe mode

    - by Alper
    Hello, here is my setup:

    - Host OS: XP SP3
    - Guest OS: XP SP3 (using VMware)

    I checked the /SAFEBOOT option in the System Configuration utility in the guest for troubleshooting. Now the guest OS boots up in Safe Mode, but I cannot log in with my user id/password. Here is what I tried:

    - [domainName]\[userid] for user name = login fails
    - Administrator with blank password = login fails
    - Safe Mode with Networking = login fails
    - Safe Mode with Command Prompt = login fails
    - Last Known Good Configuration = starts in safe mode, login fails
    - Start Windows Normally = starts in safe mode, login fails

    I don't have a CD to get to the Recovery Console. Any ideas?

  • Network shares do not mount

    - by Alex
    My network shares were mounting fine yesterday; suddenly they are not. They had been mounting fine for the last two weeks, or however long it has been since I added them. When I run sudo mount -a I get the following error:

        topsy@monolyth:~$ sudo mount -a
        mount error(12): Cannot allocate memory
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        mount error(12): Cannot allocate memory
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        mount error(12): Cannot allocate memory
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        mount error(12): Cannot allocate memory
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        topsy@monolyth:~$

    I followed this guide when setting them up: http://ubuntuforums.org/showthread.php?t=288534 So I tried removing them by doing the reverse, rebooting, then adding them again and rebooting. The problem persists.
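    For context, a share set up per that guide is driven by /etc/fstab entries consumed by mount -a; a representative line looks something like the sketch below (server name, share, mount point and credential path are illustrative placeholders, not details from the question):

        //server/share  /media/share  cifs  credentials=/home/topsy/.smbcredentials,iocharset=utf8,uid=topsy  0  0

    The error(12) itself is raised by mount.cifs at mount time rather than while parsing the line, so the entries can be syntactically fine and still fail this way.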

  • JSR Updates - Multiple JSRs migrate to latest JCP version

    - by Heather VanCura
    As part of the JCP.Next reform effort, many JSRs have migrated to the latest version of the JCP program in the last month. These JSRs' Spec Leads and Expert Groups are contributing to the strides the JCP has been making to bring greater community transparency, participation and agility to JSR development through the JCP program. Any other JSR Spec Leads interested in migrating to the latest JCP version - now JCP 2.9 as of 13 November, incorporating the merged Executive Committee (EC) - should see the Spec Lead Guide for instructions on migrating. For JCP 2.8 JSRs, you are effectively already operating under JCP 2.9, since there are no longer two ECs; the merged EC is the only difference for JCP 2.8 JSRs migrating to JCP 2.9. To make the migration official, just inform your Expert Group on a public channel and email your request to admin at jcp.org. The migrated JSRs:

    - JSR 310, Date and Time API, led by Stephen Colebourne and Michael Nascimento and Oracle (Roger Riggs)
    - JSR 349, Bean Validation 1.1, led by RedHat (Emmanuel Bernard)
    - JSR 350, Java State Management, led by Oracle (Mitch Upton)
    - JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services, led by Oracle (Santiago Pericas-Geertsen and Marek Potociar)
    - JSR 347, Data Grids for the Java Platform, led by RedHat (Manik Surtani)

  • Elo system behaves oddly in program I've created

    - by adc
    Alright, so I'm looking to build a small program (C# and XAML) that, essentially, does this:

    1. Generate an array of players. Each player has a current rating and a true rating. I set current rating to 1200 as a starting point right now; I've also tried setting it to true rating, and to the average of the two. True rating is what their skill level actually is, calculated from percentages based on the current League of Legends rating system; generating an array of 970 thousand players gives results very similar to the data from here: (removed due to URL limit - but trust me, the results are very similar). This array is of a length specified by the user.
    2. If need be, sort the array from smallest to largest.
    3. Play X number of games, again specified by the user. This is done by taking the array of players (which is sorted by current rating after being created) and running through it in groups of 10. The first five are on team one, the second five are on team two. It then takes the true rating of these players and calculates an expected chance to win using the Elo system. It generates a random double and compares it to the expected chance to win; if the number is lower, team one wins - otherwise team two wins. I then update the rating of the players via, again, the Elo system - giving the winning team a score of 1 and the losing team a score of 0. I use a K value of 36 (but have tried 12, 24, and even higher ones) and an F value of 400.
    4. After going through the entire loop of players (which I have conveniently forced to be a multiple of ten), it sorts the array - again by current rating.

    This, if my understanding of the Elo system is correct, runs properly. However, it doesn't seem to work. I have a running test telling me how many players of the full array are within 100 current rating of their true rating. I would expect some portion of the population to be outside this range (as probability is not always going to go in their favor), but a full 40-45% of the population is outside it. I also have it outputting the maximum difference between true and current rating - and I have never seen this drop below 500! It hovers between 550-600, occasionally going over or under. I'm at a loss as to what to change - I've fiddled with the K and F values, where I start all the players, etc., but nothing changes the fact that eventually a good 40% of the population is outside the range. And it isn't that I have it playing too few games - it has now run through over 60 thousand games and the problem never disappears or really fluctuates. The full C# code, including everything except the XAML file and the Player class (pastebin is being very slow and I can only post two links, so I can't link to the XAML file): http://pastebin.com/rFcZRL84 The Player class: http://pastebin.com/4cJTdTRu I guess my question is: did I do anything wrong? Is there a problem with the way I implemented the system, or is it just that Riot uses a significantly modified Elo system? I don't think it's the latter, as that still wouldn't explain the massive true/current rating differences to me, however.
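    For reference, a minimal sketch of the Elo math as described above - expected score from the true ratings, then a K = 36 update against the actual result. Averaging the five true ratings per team is an assumption here; the question doesn't say how individual ratings are combined:

        using System;
        using System.Linq;

        static class Elo
        {
            const double K = 36.0;   // K value from the question
            const double F = 400.0;  // F value from the question

            // Expected chance that team one beats team two, from their (true) ratings.
            public static double ExpectedWin(double teamOne, double teamTwo)
            {
                return 1.0 / (1.0 + Math.Pow(10.0, (teamTwo - teamOne) / F));
            }

            // Standard Elo update: score is 1 for a win, 0 for a loss.
            public static double Update(double currentRating, double expected, double score)
            {
                return currentRating + K * (score - expected);
            }
        }

        class Demo
        {
            static void Main()
            {
                var rng = new Random();
                double[] teamOneTrue = { 1250, 1190, 1300, 1210, 1180 };
                double[] teamTwoTrue = { 1220, 1260, 1170, 1240, 1200 };

                double expected = Elo.ExpectedWin(teamOneTrue.Average(), teamTwoTrue.Average());
                bool teamOneWins = rng.NextDouble() < expected;  // the comparison described above
                double scoreTeamOne = teamOneWins ? 1.0 : 0.0;

                // A team-one player sitting at current rating 1200:
                Console.WriteLine("Expected {0:F3}, team one wins: {1}", expected, teamOneWins);
                Console.WriteLine("New rating: {0:F1}", Elo.Update(1200.0, expected, scoreTeamOne));
            }
        }

    One thing worth checking against an implementation like this: a team-two player must be updated with the complementary expectation, i.e. Elo.Update(r, 1 - expected, teamOneWins ? 0 : 1), never with team one's expected value.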

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin-Android. The sample application produces a rotating colored cube, and I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by: adding a cumulative variable to store my Z distance; adding GL.Enable(All.DepthBufferBit) - unsure if I put it in the right place; and adding GL.Translate(0.0f, 0.0f, Depth) before the rotate functions. Result: the cube rotates a couple of times, then disappears; it seems to be getting clipped out of the frustum. So my question is: what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance as this is very basic stuff, but I am still learning :P. I would appreciate it if anyone could show me the best way to get the cube to still rotate but also move along the Z axis. I have commented all my modifications in the code:

        // This gets called when the drawing surface is ready
        protected override void OnLoad (EventArgs e)
        {
            // this call is optional, and meant to raise delegates
            // in case any are registered
            base.OnLoad (e);

            // UpdateFrame and RenderFrame are called
            // by the render loop. This takes effect
            // when we use 'Run ()', like below
            UpdateFrame += delegate (object sender, FrameEventArgs args) {
                // Rotate at a constant speed
                for (int i = 0; i < 3; i++)
                    rot [i] += (float) (rateOfRotationPS [i] * args.Time);
            };

            RenderFrame += delegate {
                RenderCube ();
            };

            GL.Enable(All.DepthBufferBit); // Added by Noob
            GL.Enable(All.CullFace);
            GL.ShadeModel(All.Smooth);
            GL.Hint(All.PerspectiveCorrectionHint, All.Nicest);

            // Run the render loop
            Run (30);
        }

        void RenderCube ()
        {
            GL.Viewport(0, 0, viewportWidth, viewportHeight);

            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            if (viewportWidth > viewportHeight) {
                GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f);
            } else {
                GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
            }

            GL.MatrixMode (All.Modelview);
            GL.LoadIdentity ();

            Depth -= 0.02f;                  // Added by Noob
            GL.Translate(0.0f, 0.0f, Depth); // Added by Noob
            GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
            GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);
            GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f);

            GL.ClearColor (0, 0, 0, 1.0f);
            GL.Clear (ClearBufferMask.ColorBufferBit);

            GL.VertexPointer(3, All.Float, 0, cube);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.Float, 0, cubeColors);
            GL.EnableClientState (All.ColorArray);
            GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles);

            SwapBuffers ();
        }
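    Not part of the original post, but a sketch of the likely fixes, assuming the same OpenTK ES 1.1 bindings as the sample. All.DepthBufferBit is a clear-mask flag rather than a capability, so depth testing is enabled with All.DepthTest; the depth buffer must be cleared every frame; and the Ortho call puts the near/far planes at -1/1, so anything translated past Z = -1 gets clipped - the far plane has to be pushed out (or a perspective projection used) for the cube to recede visibly:

        // In OnLoad - enable depth *testing* (DepthBufferBit is not a capability):
        GL.Enable(All.DepthTest);

        // In RenderCube - clear colour AND depth each frame:
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

        // Projection - give the scene some Z range to travel through. With Ortho the
        // cube never shrinks with distance, so a frustum is the better fit here;
        // the exact values are illustrative:
        GL.MatrixMode(All.Projection);
        GL.LoadIdentity();
        float aspect = (float) viewportWidth / viewportHeight;
        GL.Frustum(-aspect, aspect, -1.0f, 1.0f, 1.0f, 100.0f);  // near = 1, far = 100

        // Modelview - start inside the frustum, then let Depth grow more negative
        // each frame so the cube recedes until it crosses the far plane:
        GL.MatrixMode(All.Modelview);
        GL.LoadIdentity();
        GL.Translate(0.0f, 0.0f, -2.0f + Depth);  // Depth -= 0.02f per frame, as before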

  • Screen flicker -> Severe System Slowdown?

    - by Adam Robinson
    I'm using a Dell D830 laptop, and over the last few weeks it has developed a very irritating screen-flicker problem that slows the system down almost to the point of being unusable. At seemingly random times (no commonality in how long the system has been running, what I was doing, what applications were open, etc.) my screen (I use two external LCDs with the laptop closed in a dock) flickers for a moment, then the system becomes incredibly slow. The screen redraws painfully slowly - almost like what you might expect to see with generic graphics drivers installed - and the entire system is maddeningly unresponsive. The only thing that seems to correct the issue is a restart. I've checked the event logs and nothing out of the ordinary is there, and definitely nothing that's common to all of the events. I'm running XP Pro SP2. Any ideas?

  • Improved Customer Experience, but at what Cost?

    - by Tony Berk
    We can all probably agree that improving your customers' experience is a good thing. But a key question many people are asking is: will it help your organization, and, in particular, what are the financial benefits? That's a good question, especially when companies ARE experiencing phenomenal return on investment (ROI). Of course, there are many factors that impact ROI or other measures of success, but we'd like to share some success stories as examples of customer experience in action and delivering positive results. If you would like to learn more about the economics of customer experience, see Brian Curran's presentation at the Oracle Customer Experience Summit last month. In this series of blog posts, we'll share actual customer stories. Today's example is Dell, which uses Oracle Real-Time Decisions (RTD) and Siebel CRM as part of their customer experience portfolio to better understand their customers' needs and wants and provide consistent interactions. Regular readers of this blog are probably familiar with Siebel, but RTD may be new to many of you. RTD is a complete decision-management solution that delivers real-time decisions and recommendations and automatically renders decisions within a business process to create tailored messaging for every customer interaction. What does that mean? In the video below, Dell describes how customer experience is important not just for one interaction channel, but across all "vehicles." RTD is helping Dell understand customer behavior and communicate with the customer in a more relevant manner across all communication or interaction channels, including sales and service call centers, email marketing and online. Dell continues to expand its use of RTD because the benefits are showing up in sales, service and marketing results, including a 19% increase in close rates, faster issue resolution and a 40% improvement in revenue per click in email marketing. Click here to learn more about Oracle Customer Experience, and stay tuned for more customer spotlights.

  • Start FF with plugins disabled

    - by justSteve
    Strange problem. I opened FF 3.6.3 after a reboot and my last-opened tabs appeared, but none of the page or menu elements would respond to a click. No error messages... just locked up. I started FF with a different profile and it works normally. Next I used FEBE to restore a backup of my working profile and, after the restore, it's doing the same thing - locked up. There are no newly installed plugins, but since a brand-new profile works while a restored one doesn't, it kinda points to a problematic plug-in. Is there any way to start FF with all plugins disabled? thx
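    For what it's worth, Firefox of this vintage has a built-in way to do roughly this - safe mode starts the browser with all extensions disabled:

        firefox -safe-mode

    On Windows the same thing is available via the "Mozilla Firefox (Safe Mode)" Start-menu shortcut. If the restored profile works in safe mode, that points squarely at one of its extensions.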

  • Install Previous Version of PHP Package from Debian Testing Using Apt

    - by Metric Scantlings
    Is there a way to install an older Debian testing repository version of a package using apt-get? Specifically, I am looking to install the latest version of PHP 5.2.x on Debian Lenny. The last time I set up an environment, 5.2.12 just happened to be the version in Debian testing. That was perfect, convenient. Now, testing is at 5.3.x which won't work for my purposes, and my attempts at sudo apt-get -t testing install php5=5.2.12* are answered with E: Version '5.2.12*' for 'php5' was not found.
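    For what it's worth, apt's version syntax wants the exact version string, including the Debian revision - a wildcard like 5.2.12* won't match anything, which is what that error says. A sketch of the usual workflow (the version string below is illustrative, and snapshot.debian.org archives packages that have already left testing):

        # See exactly which versions apt can currently offer
        apt-cache policy php5

        # Install a specific version using the full version string
        sudo apt-get install php5=5.2.12.dfsg.1-2

        # If 5.2.x is gone from testing, an archived repo can be added to sources.list, e.g.:
        # deb http://snapshot.debian.org/archive/debian/20100101T000000Z/ testing main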

  • Monitoring over Time with Nagios: How?

    - by David
    Nagios in its standard usage monitors with point-in-time checks: either something is - or is not - true. Other tools like SGI's PCP, HP's MeasureWare, and SEC provide monitoring over time - things like average disk access time over the last five minutes, or other similar metrics. I'd like to have something that would monitor and fire off alarms based on time-based checks using historical data. I'm already running NDOUtils, which seems like a natural source for such data. Is there anything like this for Nagios?
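    As a sketch of the NDOUtils idea: since NDOUtils writes every check result to MySQL, a small custom plugin could aggregate over a time window and alert on the result. The table and column names below follow NDOUtils' default nagios_ schema, but treat them as assumptions to verify against your install:

        -- Average execution time for one service's checks over the last five minutes
        SELECT AVG(sc.execution_time)
        FROM nagios_servicechecks sc
        JOIN nagios_objects o ON o.object_id = sc.service_object_id
        WHERE o.name1 = 'myhost'
          AND o.name2 = 'Disk Access Time'
          AND sc.start_time > NOW() - INTERVAL 5 MINUTE;

    Wrapped in a script that compares the result against warning/critical thresholds, this behaves like any other Nagios check while actually looking at history.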

  • How to make ssh/rsync/etc use a VLAN network interface?

    - by Annan
    A company I work for has a number of virtual servers with ElasticHosts. They are set up in such a way that eth1 is on a private VLAN connecting them to each other. This is so backups sent between servers are not charged at the same rate as external data transfer. My understanding of how VLANs and network interfaces work is sketchy at best. How can I make ssh, rsync, etc. transfer data through the VLAN? My final solution (I spent a while trying to figure this out): for all servers involved, edit /etc/sysconfig/network-scripts/ifcfg-eth1:

        DEVICE=eth1
        BOOTPROTO=static
        ONBOOT=yes
        HWADDR=YOUR_MAC_ADDR
        IPADDR=192.168.0.100
        NETMASK=255.255.255.0

    HWADDR should already be set, and the last octet of IPADDR should be different on each server. Then run, on all servers:

        /etc/init.d/network restart

    After this, the IP addresses specified by IPADDR can be used directly, like any other IP address.
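    As a usage sketch with the example addresses above: once eth1 is up, pointing ssh or rsync at a server's private address sends the traffic over the VLAN, and ssh's -b flag can additionally bind the outgoing connection to the local eth1 address:

        # Copy backups across the private VLAN instead of the public interface
        rsync -av /var/backups/ user@192.168.0.101:/srv/backups/

        # Optionally pin the client side to eth1's address explicitly
        rsync -av -e 'ssh -b 192.168.0.100' /var/backups/ user@192.168.0.101:/srv/backups/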

  • Can't use nvidia card/driver on optimus notebook

    - by Mr. Pixel
    I installed (once again) the latest official nvidia driver for my GT540m on Ubuntu 11.10. Everything seems OK with my xorg.conf file: I've manually added BusID "PCI:1:0:0", since lspci shows 01:00.0 for my GPU. The problem is, when I use the xorg.conf file generated by Xorg -configure, Xorg automatically loads the Intel GPU. So I removed everything that was not related to my nvidia card, basically leaving my xorg.conf with one screen and one device (with the nvidia driver and the above-mentioned BusID), and Xorg fails to start. The log says something like "Devices on GT540m" followed by "none", and a few lines later something like "NVIDIA(0) found a screen, but have no device for it". When I don't set the BusID, it doesn't seem to detect my card either. Thank you for any suggestion. PS: If possible, I'd like to avoid bumblebee or any similar "hybrid graphics" solution; last time I tried, I ended up reinstalling Ubuntu. Edit: Allow me to clarify the problem. I have a notebook with a GT540m graphics card and an integrated Intel GPU. I want to use the graphics card with full hardware acceleration and its official driver, as I do under Windows.

  • Enabling NAT loopback on HG556a router?

    - by galdikas
    This is one of the standard-issue Vodafone routers. I set up a web server on my laptop, and it is now accessible from the internet. However, I need to be able to access it from my own machine using the public address, and I just cannot find where to enable NAT loopback. I looked for options in both the regular and advanced user logins (web interfaces). So I suppose the last option is telnetting into it and using commands to do it, but I don't know how to access it (can't find the credentials). http://rhiggins.sdf-eu.org/blog/index.php?entry=entry110722-164625 In the above link it says that I can find this info in the configuration file, but could anyone tell me how to access it? And then what commands should I use to enable NAT loopback?

  • Podcast Show Notes: The Fusion Middleware A-Team and the Chronicles of Architecture

    - by Bob Rhubart
    If you pay any attention at all to the Oracle blogosphere you've probably seen one of several blogs published by members of a group known as the Oracle Fusion Middleware A-Team. A new blog, The Oracle A-Team Chronicles, was recently launched that combines all of those separate A-Team blogs in one. In this program you'll meet some of the people behind the A-Team and the creation of that new blog.

    The Conversation

    - Listen to Part 1: Background on the A-Team - when was it formed? What is its mission? What are some of the most common challenges A-Team architects encounter in the field?
    - Listen to Part 2 (July 3): The panel discusses the trends - big data, mobile, social, etc. - that are having the biggest impact in the field.
    - Listen to Part 3 (July 10): The panelists discuss the analysts, journalists, and other resources they rely on to stay ahead of the curve as the technology evolves, and reveal the last article or blog post they shared with other A-Team members.

    The Panelists

    - Jennifer Briscoe: Senior Director, Oracle Fusion Middleware A-Team
    - Clifford Musante: Lead Architect, Application Integration Architecture A-Team, webmaster of the A-Team Chronicles
    - Mikael Ottosson: Vice President, Oracle Fusion Apps and Fusion Middleware A-Team and Cloud Applications Group
    - Pardha Reddy: Senior Director of Oracle Identity Management and a member of the Oracle Fusion Middleware A-Team

    Coming Soon

    - Data Warehousing and Oracle Data Integrator: Guest producer and Oracle ACE Director Gurcan Orhan selected the topic and panelists for this program, which also features Uli Bethke, Michael Rainey, and Oracle ACE Cameron Lackpour.
    - Java and Oracle ADF Mobile: An impromptu roundtable discussion featuring QCon New York 2013 session speakers Doug Clarke, Frederic Desbiens, Stephen Chin, and Reza Rahman.

    Stay tuned.

  • Can I list file names (or their parent directories) that were recently deleted using rm in OS X?

    - by Andrew Grimm
    Is it possible to find out which files and directories have recently been deleted by rm in OS X? Or, failing that, is it possible to find which parent directories have had files or directories within them deleted? The OS version is Snow Leopard. Background: last night, rvm (ruby version manager) did an rm -rf of the ~/ruby directory from the home directory. (This bug has since been fixed.) Ideally, I'd like to know what files within the ~/ruby directory were deleted, but failing that, I'd like to know whether rvm deleted anything outside of ~/ruby. In case anyone's wondering about backups: just about everything within ~/ruby is a git project that has a remote repo, and I have a fairly recent Time Machine backup (only 20 days old).

  • Branching and CI Builds with Agile

    - by Bob Horn
    We follow many agile processes, including automated tests, continuous integration, sprint reviews, etc... We're currently having a debate about how often we should branch release builds. We've been doing two-week sprints and trying to deploy to production at the end of each sprint. Some of us think we should be branching every sprint. Some of us think that's overkill. If a project encompasses three Visual Studio solutions, and we branch every sprint, then that's three branches, and three CI builds to create every two weeks. If we do this for six months, we'll end up with 36 branches and 36 CI builds. There is overhead involved in that. For those of us that think that branching every sprint is overkill, we don't have a very good alternative. On my last project, we deployed some solutions from the Main trunk. Yeah, that's not good, but it saved on some of the overhead. What's the right way to manage branching/releasing and CI builds, using agile, when we have such short (two-week) sprint cycles?
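    For concreteness, a sketch of what the per-sprint release branch looks like in Subversion (paths and the sprint name are illustrative) - the overhead being debated is one copy like this per solution, plus a matching CI build definition:

        # Mark the sprint-12 release point for one solution
        svn copy ^/MySolution/trunk ^/MySolution/branches/sprint-12 \
            -m "Branch for sprint 12 release"

    Since Subversion branches are cheap server-side copies, the real recurring cost is the CI build per branch, which is where the 36-builds arithmetic above comes from.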

  • At what point would you drop some of your principles of software development for the sake of more money?

    - by MeshMan
    I'd like to throw this question out there to see where the happy medium lies. I'll admit that in the last 12 months I picked up TDD and a lot of the Agile values in software development. I was so overwhelmed with how much better my development of software became that I would never drop them out of principle. Until... I was offered a contracting role that doubled my take-home pay for the year. The company I joined didn't follow any specific methodology, the team hadn't heard of anything like code smells, SOLID, etc., and I certainly wasn't going to get away with spending time doing TDD if the team had never even seen unit testing in practice. Am I a sell-out? No, not completely... Code will always be written "cleanly" (as per Uncle Bob's teachings) and the principles of SOLID will always be applied to the code that I write as they are needed. Testing was dropped for me, though; the company couldn't afford to have such an unknown handed to the team who, quite frankly, even if I did create test frameworks, would never use or maintain them correctly. Using that as an example, at what point would you say a developer should never drop his craftsmanship principles for the sake of money or other personal benefits? I understand that this can be a very personal opinion on how concerned one is with one's own needs, business needs, the sake of craftsmanship, etc. But one can consider that, for example, testing can be dropped if the company decided they would rather have a test team than understand unit testing in programming - would that be something you could forgive yourself for, like I did? So given that there is something you would drop, there usually should be an equal cost in the business that makes up for what you drop - hopefully; unless of course you are pretty much out for lining your own pockets and not community/social collaborating ;). Double your money, go back to RAD? Or walk on, look for someone doing Agile, and never look back...

  • Windows 7 systeminfo reporting incorrect System Boot Time?

    - by rdingwall
    I work 9-5 and switch my PC off when I leave the office each day. When doing timesheets I need to know what time I got to work, so I usually run systeminfo from cmd to find the System Boot Time. Since upgrading to Windows 7, however, it has started reporting bizarre times between 11pm and 2am instead of 8-9am. Today it says it booted at 11:34pm last night! I checked the event log and there are no entries between when I shut down at 5:30pm yesterday and booted around 8am this morning. Has anyone else encountered this?
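    As a cross-check (not from the original question), the event log can date the boot independently of systeminfo: event ID 6005, "The Event log service was started", is written once per boot and can be pulled from a prompt:

        wevtutil qe System /q:"*[System[(EventID=6005)]]" /c:1 /rd:true /f:text

    If the 6005 timestamp says 8am while systeminfo says 11:34pm, it's the boot-time field itself that's off rather than the system clock.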

  • How to remove the smell of cigarette from the computer?

    - by Tio
    So I'm a smoker, and of course I smoke in front of the computer when I'm at home. Last Friday I moved out of my mother's house to my own place; there, the computers were always turned on in a room where everybody could smoke, so the smell didn't bother me. At my new home, I turned them on about 15 minutes ago, and I'm dying from the smell of cigarettes (this may sound kind of stupid coming from a smoker, but I hope some of you understand). The solution has to be relatively quick, because I can't stay without the desktop and the server for a week, for example. Tomorrow I'm going to open them both and remove all the dust inside; this should clear some of the smell, but probably won't remove it completely. Does anyone know a technique to get rid of the smell?

  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered some problems, as described below. The engine has 3 threads in total: script, render, and audio. Each frame, we update these 3 threads simultaneously. As the threads update themselves, they produce tasks and put them into a public storage area. When all the threads finish their update, each thread goes and copies the tasks for itself, one by one. After all the threads finish their task copying, we make the threads process those tasks and update themselves simultaneously, as described before. So this is the general idea of the task-schedule part of our engine. OK, well, the whole task-schedule part works well, but here's the problem. Taking the camera as the simplest example:

        local oldPos = camera:getPosition() -- ( 0, 0, 0 )
        camera:setPosition( 1, 1, 1 )       -- Won't work now, because the render thread will
                                            -- process the task at the beginning of the next frame
        local newPos = camera:getPosition() -- Still ( 0, 0, 0 )

    So that's the problem: if you intend to change a property of an object owned by another thread, you have to wait until that thread processes the property-changing message. As a result, what you get from the object is still the information from the last frame. So, is there a way to solve this problem? Or have we built the task-schedule part in a wrong way? Thanks for your answers :)
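    One common pattern - sketched below in C# purely for illustration, since the engine in the question is script-driven - is to keep a caller-side shadow of each property: the setter updates the shadow immediately and queues the task for the owning thread, so reads later in the same frame already see the new value:

        using System;
        using System.Collections.Concurrent;

        class Camera
        {
            // Script-side shadow of the render thread's authoritative state.
            private (float X, float Y, float Z) pending = (0, 0, 0);

            // Tasks the render thread drains at the start of its next frame.
            private readonly ConcurrentQueue<Action> renderTasks = new ConcurrentQueue<Action>();

            public (float X, float Y, float Z) GetPosition() => pending;

            public void SetPosition(float x, float y, float z)
            {
                pending = (x, y, z);  // visible to the script thread immediately
                renderTasks.Enqueue(() => ApplyOnRenderThread(x, y, z));
            }

            // Runs on the render thread when it processes its queued tasks.
            private void ApplyOnRenderThread(float x, float y, float z)
            {
                // update the real camera transform here
            }

            // Called by the render thread while draining the public storage area.
            public void ProcessTasks()
            {
                while (renderTasks.TryDequeue(out var task))
                    task();
            }
        }

    The shadow is only ever touched from the script thread, so it needs no extra locking; the queue handles the cross-thread handoff.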

  • Advancing my Embedded knowledge.....with a CS degree.

    - by Mercfh
    So I graduated last December with a B.S. in Computer Science from a pretty well-known engineering college. Towards the end, however, I realized that I actually like assembly/lower-level C programming more than the higher-level abstracted OO stuff (for example, I programmed my own device drivers for USB devices in Linux). But we really didn't concentrate much on that in college; perhaps an EE/CE degree would've been better, but I knew the classes... and things weren't THAT much different. I've messed around with Atmel AVRs/Arduino stuff (mostly robotics) and Linux kernels/device drivers, but I really want to enhance my skills and maybe one day get a job doing embedded work. (I have a job now - an entry-level software dev/tester job. It's a good job, but not exactly where my passion lies.) (I'm pretty good with C and certain ASMs for specific microcontrollers.) Is this even possible with a CS degree, or am I screwed, since technically my degree usually doesn't involve much embedded stuff? If I'm NOT screwed, then what should I be studying/learning? How would I even go about it... I guess I could eventually say "Experienced with XXXX microcontrollers/ASM/etc...", but still, it wouldn't be the same as having a CE/EE degree. Also... going back to college isn't an option, just FYI. Edit: Any book recommendations for getting used to this stuff? I have ARM System-on-Chip Architecture (2nd edition) - it's good... for ARM stuff lol

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a long-ish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.:

        20100523
        20100614
        20100721
        20100722
        20100809
        20100901
        20101001
        20101003
        20101104
        20101119
        20101203
        20101218
        20110102

    I'm looking for a script to import each of these snapshots as a new Subversion revision in the source tree, the end result being that the HEAD revision is the same as the last snapshot, and the other revisions are numbered accordingly. Some other requirements:

    - The HEAD revision should not be cumulative over the previous snapshots, i.e. files that appeared in older snapshots but don't appear in later ones (e.g. due to refactoring) should not appear in the HEAD revision.
    - Meanwhile, there should be continuity between files that do persist between snapshots: Subversion should know that there are previous versions of these files and not treat them as brand-new files within each revision.

    Some background about my aim:

    - I need to formally revision-control this work rather than keep local private snapshot copies.
    - I plan to release this work as open source, so version controlling would be highly recommended.
    - I am evaluating some of the current popular version control systems (Subversion and GIT), BUT I definitely need a working solution in Subversion.
    - I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering, as I would also like a solution in GIT (I will post an answer separately for GIT so separate camps of folks who have expertise in GIT and Subversion will be able to give focused answers on one or the other).

    The same question but for GIT: Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree? There is an outline answer for Subversion on stackoverflow.com but not enough specifics about the script - what commands to use, code to check valid scenarios if necessary, i.e. a working script basically: http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
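    A sketch of one way to script the Subversion side, assuming a fresh repository: rsync --delete into a single working copy gives the non-cumulative HEAD (files absent from a snapshot get scheduled for deletion), while committing every snapshot into the same paths preserves per-file continuity. The repository path is a placeholder, and the status parsing assumes no spaces in filenames:

        #!/bin/sh
        REPO=file:///var/svn/myproject      # hypothetical repository
        svn mkdir -m "Create trunk" "$REPO/trunk"
        svn checkout "$REPO/trunk" wc

        for snap in 20100523 20100614 20100721 20100722 20100809 20100901 \
                    20101001 20101003 20101104 20101119 20101203 20101218 20110102; do
            rsync -a --delete --exclude='.svn' "$snap/" wc/
            (
                cd wc || exit 1
                svn add --force .                                     # schedule anything new
                svn status | awk '/^!/ {print $2}' | xargs -r svn rm  # schedule deletions
                svn commit -m "Import snapshot $snap"
            )
        done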
