Search Results

Search found 34513 results on 1381 pages for 'end task'.


  • how to properly implement alpha blending in a complex 3d scene

    - by Gajet
    I know this question might sound easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far: First off, I thought about sorting objects by depth. This simply fails because objects are not simple shapes; they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera. Then I thought about sorting triangles, but that might also fail. Though I'm not sure how to implement it, there is a rare case that might again cause a problem, in which two triangles pass through each other; again, no one can tell which one is nearer. The next thing was using the depth buffer; after all, the main reason we have a depth buffer is the sorting problems I mentioned. But now we get another problem: since objects might be transparent, more than one object may be visible in a single pixel. So for which object should I store the pixel depth? I then thought maybe I could store only the depth of the front-most object, and use that to determine how to blend the next draw calls at that pixel. But again there was a problem: think of two semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one could still see the most distant plane through it. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I can't use sorting methods here either, for the same reasons I've explained above. Finally, the only thing I can imagine working is to render all objects into different render targets, then sort those layers and composite the final output. But this time I don't know how to implement that algorithm.
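
    A common starting point (not a full solution to the intersecting-geometry cases described above) is to draw all opaque geometry first with depth writes on, then sort the transparent objects back to front by distance from the camera and draw them with depth writes off. Below is a minimal XNA-flavored C# sketch of that pass; ISceneObject, its Position property, and its Draw method are hypothetical names, not from the question:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Hedged sketch, not the questioner's code: hypothetical scene-object type.
        public interface ISceneObject
        {
            Vector3 Position { get; }
            void Draw(Matrix view, Matrix projection);
        }

        public static class TransparencyPass
        {
            // Draw opaque geometry first (depth writes on), then call this.
            public static void DrawTransparent(GraphicsDevice device,
                List<ISceneObject> transparentObjects, Vector3 cameraPosition,
                Matrix view, Matrix projection)
            {
                // Sort back to front: the farthest object is drawn first,
                // so nearer transparent objects blend over it.
                transparentObjects.Sort((a, b) =>
                    Vector3.DistanceSquared(cameraPosition, b.Position).CompareTo(
                    Vector3.DistanceSquared(cameraPosition, a.Position)));

                device.BlendState = BlendState.AlphaBlend;
                device.DepthStencilState = DepthStencilState.DepthRead; // test, don't write

                foreach (var obj in transparentObjects)
                    obj.Draw(view, projection);
            }
        }

    This handles the common cases; the mutually-intersecting geometry the question worries about still needs an order-independent technique such as depth peeling, which is essentially the render-target layering idea at the end of the question.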

    Read the article

  • Herding Cats - That's My Job....

    - by user709270
    Written by Mike Schmitz - Sr. Director, Program Management, Oracle JD Edwards. I remember seeing a Super Bowl commercial several years ago showing some well-dressed people on the African savanna herding cats. I remember turning to the people I was watching the game with and telling them, "You just watched my job description." Releasing software is a multi-faceted undertaking. In addition to making sure the code changes are complete, you also need to make sure the other key parts of a release are ready. For example, when you have a question about the software, will the person on the other end of the phone be ready to answer it? If you need training on that cool new piece of functionality, will there be an online training course ready for you to review? If you want to read about how the software is supposed to function, is there a user manual available? Putting all the release pieces together so they are available at the same time is what the JD Edwards Program Management team does. It is my team's job to work with all the different functional teams so that when a release is made generally available you have all the things you need to be successful. The JD Edwards Program Management team uses an internal planning tool called the Release Process Model (RPM) to ensure all deliverables are accounted for in a release. The RPM makes sure all the release deliverables are ready at the correct time and in the correct format. The RPM really helps all the functional teams in JD Edwards know which release deliverables they are accountable for and when they are to be delivered. It is my team's job to make sure everyone understands what they need to do and when they need to deliver; we then make sure they are all on track to deliver on time and in the right format. It is just that some days this feels like herding cats.

    Read the article

  • How to make my simple round sprite look right in XNA

    - by Joshua Perina
    Ok, I'm very new to graphics programming (but not new to coding). I'm trying to load a simple image in XNA, which I can do fine. It is a simple round circle which I made in Photoshop. The problem is that the edges come out rough when I draw it on the screen, even at the exact original size; the anti-aliasing is missing. I'm sure I'm missing something very simple:

        GraphicsDevice.Clear(Color.Black);

        // TODO: Add your drawing code here
        spriteBatch.Begin();
        spriteBatch.Draw(circle, new Rectangle(10, 10, 10, 10), Color.White);
        spriteBatch.End();

    I couldn't post a picture because I'm a first-time poster, but my smooth PNG circle has rough edges. I found that if I use:

        spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.NonPremultiplied);

    I get a smooth image when it is drawn at the same size as the original PNG. But if I want to scale that image up or down, the rough edges return. How do I get XNA to smoothly resize my simple round image to a smaller size without getting the rough edges?
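
    A hedged guess at the scaling artifacts, offered as an assumption rather than a definitive fix: with BlendState.NonPremultiplied, the sampler interpolates color from fully transparent edge texels when the sprite is resized, which shows up as fringed or rough edges. XNA 4's content pipeline premultiplies alpha by default, so drawing with the matching BlendState.AlphaBlend and linear filtering usually scales cleanly:

        // Sketch: premultiplied-alpha content (the XNA 4 pipeline default)
        // drawn with the matching blend state and smooth linear filtering.
        spriteBatch.Begin(SpriteSortMode.Deferred,
                          BlendState.AlphaBlend,
                          SamplerState.LinearClamp,
                          DepthStencilState.None,
                          RasterizerState.CullCounterClockwise);
        spriteBatch.Draw(circle, new Rectangle(10, 10, 50, 50), Color.White);
        spriteBatch.End();

    Scaling up a small source image will still look blocky, so a common trick is to author the circle larger than needed in Photoshop and always scale down.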

    Read the article

  • Ubuntu Server 11.10 boot, white terminal with garbled black text

    - by SpeedCrazy
    I just installed Ubuntu Server 11.10 and the install went fine. This system is running on an Intel Pentium II board with onboard graphics. However, when I try to boot into Ubuntu I get a white terminal with garbled black text. I have tried various grub 'fixes', as googling the issue suggested it was a resolution- or grub-related issue. I cannot ssh in, so the issue affects Linux itself as well. I have had no luck with anything thus far and am at my wits' end. This was my first Ubuntu excursion, as my friend told me it was better for servers than CentOS because it was easier... Not so much... Does anyone have any ideas as to what the issue could be? When answering, bear in mind I am an Ubuntu noob and Linux novice. As of 1/26/12 I have tried adding the console=tty1 option to /etc/default/grub and running update-grub. This results in the line in the boot parameters that normally reads:

        linux /vmlinuz-3.0.0-12-generic-pae root=/dev/mapper/dev-root ro vt.handoff=7

    now reading:

        linux /vmlinuz-3.0.0-12-generic-pae root=/dev/mapper/dev-root ro console=tty1 vt.handoff=7

    This does not work. Is there any way to have console=tty1 inserted on a line by itself? I am at my wits' end. Thanks for all your help, Speed

    Read the article

  • Is it usual if my employer asks me to get MCP certificates for a higher salary?

    - by Vimvq1987
    I just had a salary negotiation this morning (I passed three interviews in the last 3 weeks), and it was like a game. I was stubborn about my expectation: that number, or I leave. OK, to be honest, it's not about money, but I, a not-very-experienced developer, wanted to see how much the employer would pay me, and it was fun. And at last, my employer gave me this: "OK, * $, but with two conditions: first, you get your spoken skills improved (English is not my native language), and second, you get MCPs before the end of the year." He asked me to get 3 MCP certificates. The company will buy any books necessary for the exams, but I must read them in my free time, and take and pass the exams. If I don't get them, my employer will not kick me out, but the salary discussion will be harder for me. I accepted the offer; I thought it was good enough. But I wonder, is this usual? If you're an employer, have you ever given that kind of offer to a candidate? If you're an employee, have you ever received, or would you accept, an offer like that?

    Read the article

  • Sample domain model for online store

    - by Carel
    We are a group of 4 software development students currently studying at the Cape Peninsula University of Technology. Currently, we are tasked with developing a web application that functions as an online store. We decided to do the back-end in Java while making use of Google Guice for persistence (which is mostly irrelevant to my question). The general idea so far is to use PHP to create the website. We decided that we would like to try, after handing in the project, to register a business and actually implement the website. The problem we have been experiencing is with the domain model. These are mostly small issues; however, they are starting to impact the schedule of our project. Since we are all young IT students, we have virtually no experience in the business world. As such, we spent quite a significant amount of time planning the domain model in the first place. Now, one of the issues we're picking up is, say, the reference between the Customer entity and the Order entity. Currently, we don't have the customer id in the Order entity, and we have a list of Order entities in the Customer entity. Lately, I have wondered if the persistence mechanism will put the customer id physically in the order table, even if it's not in the entity. So I started wondering: if you load a customer object, will it search the entire order table for orders with the customer's id? Now, say you have 10 000 customers and 500 000 orders; won't this take an extremely long time? There are also some business processes that I'm not completely clear on. Finally, my question is: does anyone know of a sample domain model out there that is similar to what we're trying to achieve and that will be safe to look at as a reference? I don't want to be accused of stealing anybody's intellectual property, especially since we might implement this as a business.
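
    On the loading worry above: with most ORMs the customer id does end up as a physical foreign-key column on the order table even if the entity only models the relationship, and the collection on the customer side is typically loaded lazily through an indexed lookup rather than a scan. The question's stack is Java, but as a rough illustration in C# (Entity Framework conventions, all names hypothetical), the shape is:

        using System.Collections.Generic;

        // Hedged illustration, not the group's actual model: the FK lives on
        // the Order side, and the Customer's collection is lazy-loaded.
        public class Customer
        {
            public int CustomerId { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Order> Orders { get; set; } // loaded on demand
        }

        public class Order
        {
            public int OrderId { get; set; }
            public int CustomerId { get; set; }           // physical FK column; index this
            public virtual Customer Customer { get; set; }
        }

    With an index on the Orders.CustomerId column, fetching one customer's orders is an index seek rather than a full scan, so 500 000 rows is not by itself a problem.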

    Read the article

  • XNA on the TechNet Wiki

    - by Michael B. McLaughlin
    Many months ago I came across an interesting Microsoft website, the TechNet Wiki, when I was looking for information about something that I can't even remember anymore. I noticed at the time that its section on gaming technologies was sparse, and I even exchanged a few emails with one of the friendly Microsoft employees who contributes there regularly about some ideas I had for the site. I seem to recall mentioning my intention to add some articles on XNA when I found the time, but between one thing and another it seemed like I was busy from the end of last summer straight through 'til now. Yesterday I came across the TechNet Wiki link in my miscellaneous links collection and remembered my intentions from many months ago. I decided that adding XNA pages to it would make a nice project to work on while taking breaks from my other projects. So I wrote my first two articles for it: XNA Framework Overview and Content Pipeline Overview. I hope to add more in the coming days and weeks. I'd be delighted if some of my fellow XNA enthusiasts out there joined in, time permitting. For anyone else who'd like to add a page or two on a topic area you're familiar with, this seems like a great opportunity to contribute to the community and help build a nice knowledge base to benefit all of us who are always interested in learning something new!

    Read the article

  • Why is my query soooooo slow?

    - by geekrutherford
    A stored procedure used in our production environment recently became so slow that it caused the calling web service to begin timing out. When running the stored procedure in Query Analyzer, it took nearly 3 minutes to complete. The stored procedure itself does little more than create a small bit of dynamic SQL which calls a view with a WHERE clause at the end. At first the thought was that the query used within the view needed to be optimized. The query is quite long, and it is therefore easy to jump to this conclusion. Fortunately, after bringing the issue to the attention of a coworker, they asked, "Is there a where clause, and if so, is there an index on the column(s) in it?" I had no idea and quickly said as much. A quick check on the table/column used in the WHERE clause indicated that indeed there was no index. Before adding the index, and after admitting I am no SQL wiz, I checked the internet for info on the difference between clustered and non-clustered indexes. I found the following site quite helpful: OdeToCode. After adding the non-clustered index on the column, the query that used to take nearly 3 minutes now takes 10 seconds! Ah, if only I'd thought to do this ahead of time!

    Read the article

  • Why did the web win the space of remote applications and X not?

    - by Martin Josefsson
    The X Window System is 25 years old; it had its birthday yesterday (on the 15th). As you are probably aware, one of its most important features is the separation of the server side and the client side in a way that neither Microsoft's, Apple's, nor Wayland's windowing systems have. Back in the days (sorry for the ambiguous phrasing), many believed X would dominate over other ways to make windows because of this separation of server and client, allowing the application to be run on a server somewhere else while the user clicks and types on her own computer at home. This use obviously still exists, but it is marginalized at best. When we write and use programs that run on a server, we almost always use the web with its html/css/js. Why did the web win, and X not? The technologies used for the web (said html/css/js) are a mess. Combined with all the back-end frameworks (Rails, Django and all), it really is a jungle to navigate through. Still, the web thrives with creativity and progress, while remote X apps do not.

    Read the article

  • What can I do to utilize all my hard disk space?

    - by Twatcher
    I had Windows XP running on my computer. Then I installed Ubuntu from within Windows. Then I decided I wanted to have only Ubuntu, partly because I got a system message that I was out of disk space. I booted from a live Ubuntu DVD and deleted the partition with Windows on it, and also the other partition that had my data on it. I expanded the partition which I thought to be the system partition (since there was no other partition left); it had ext format. After that Ubuntu was working fine and I thought I had enough disk space, since my hard drive is an 80 GB ATA Maxtor. I left a small partition as backup. But after downloading a small number of files I got the message again that I am running out of disk space. I don't know. How can I make my disk space bigger? I am not used to Ubuntu's file system, and I don't have an overview of how I can actually see how much space is left for me to use. I basically now have one partition with the system on it and one small backup partition (as far as I understand). My system is (from the system utility): Ubuntu 12.04 LTS, 3.9 GB RAM, Intel Core 2 2.4 GHz, 80 GB ATA Maxtor. Here are the results of sudo fdisk -l:

        Disk /dev/sda: 80.0 GB, 79998918144 bytes
        255 heads, 63 sectors/track, 9725 cylinders, total 156247887 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x41ab2316

        Device Boot      Start        End     Blocks   Id  System
        /dev/sda1   *       63  123750399   61875168+   7  HPFS/NTFS/exFAT
        /dev/sda2    123750400  156246015   16247808    7  HPFS/NTFS/exFAT

    Read the article

  • Implementing a switch statement based on user input

    - by Dave Voyles
    I'm trying to delay the time it takes for the main menu screen to pop up after a user has won or lost a match. As it stands, the game immediately displays a message stating "you won / lost" and waits for 6 seconds before loading the menu screen. I would also like players to have the ability to press a key to advance to the menu screen immediately, but thus far my switch statement doesn't seem to do the trick. I've included the switch statement, along with my (theoretical) inputs. What could I be doing wrong here?

        if (gamestate == GameStates.End)
            switch (input.IsMenuDown(ControllingPlayer))
            {
                case true:
                    ScreenManager.AddScreen(new MainMenuScreen(), null); // Draws the MainMenuScreen
                    break;
                case false:
                    if (screenLoadDelay > 0)
                    {
                        screenLoadDelay -= gameTime.ElapsedGameTime.TotalSeconds;
                    }
                    ScreenManager.AddScreen(new MainMenuScreen(), null); // Draws the MainMenuScreen
                    break;
            }

        /// <summary>
        /// Checks for a "menu down" input action.
        /// The controllingPlayer parameter specifies which player to read
        /// input for. If this is null, it will accept input from any player.
        /// </summary>
        public bool IsMenuDown(PlayerIndex? controllingPlayer)
        {
            PlayerIndex playerIndex;

            return IsNewKeyPress(Keys.Down, controllingPlayer, out playerIndex) ||
                   IsNewButtonPress(Buttons.DPadDown, controllingPlayer, out playerIndex) ||
                   IsNewButtonPress(Buttons.LeftThumbstickDown, controllingPlayer, out playerIndex);
        }
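
    For what it's worth, a hedged observation on the snippet above: both the true and the false branches call ScreenManager.AddScreen, so the delay never actually gates the transition. One possible restructuring, using the same (hypothetical) API from the question:

        // Sketch only: advance immediately on input; otherwise count the
        // delay down and add the menu screen once the delay has elapsed.
        if (gamestate == GameStates.End)
        {
            if (input.IsMenuDown(ControllingPlayer))
            {
                ScreenManager.AddScreen(new MainMenuScreen(), null); // skip the wait
            }
            else
            {
                screenLoadDelay -= gameTime.ElapsedGameTime.TotalSeconds;

                if (screenLoadDelay <= 0)
                {
                    ScreenManager.AddScreen(new MainMenuScreen(), null); // delay elapsed
                }
            }
        }

    Something (for example, changing gamestate after AddScreen) still has to ensure the menu screen is only added once per match.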

    Read the article

  • Enterprise Instrumentation: The 'sessionName' parameter of value 'TraceSession' is not valid

    - by Michael Freidgeim
    We are still using Enterprise Instrumentation (which was created back in the .NET 1.1 days). In a new Server 2008 environment with IIS 7 we get the following errors:

        The 'sessionName' parameter of value 'TraceSession' is not valid. A trace session of this name does not
        exist in the TraceSessions configuration file for Windows Trace Session Manager service. Ensure that a
        session of this name exists in the TraceSessions configuration file and that the Windows Trace Session
        Manager service is started.
           at Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink..ctor(IDictionary parameters, EventSource eventSource)
           --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeConstructor(IRuntimeMethodInfo method, Object[] args, SignatureStruct& signature, RuntimeType declaringType)
           at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at System.RuntimeType.CreateInstanceImpl(BindingFlags bindingAttr, Binder binder, Object[] args, CultureInfo culture, Object[] activationAttributes)
           at Microsoft.EnterpriseInstrumentation.EventSinks.EventSink.CreateNewEventSinks(DataRow[] eventSinkRows, EventSource eventSource)

    I've seen the same errors on development Win7 machines when using IIS. It doesn't seem to be a problem under Cassini. I've checked that the Windows Trace Session Manager service has started and that the file C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\TraceSessions.config has the corresponding entry:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <defaultParameters minBuffers="4" maxFileSize="10" maxBuffers="25" bufferSize="20" logFileMode="sequential" flushTimer="3" />
            <sessionList>
                <session name="TraceSession" enabled="false" fileName="C:\Program Files (x86)\Microsoft Enterprise Instrumentation\Bin\Trace Service\Logs\TraceLog.log" />
            </sessionList>
        </configuration>

    The errors still continued, but I was able to disable the parameter in the eventSink configuration:

        <eventSink name="traceSink" description="Outputs events to the Windows Event Trace." type="Microsoft.EnterpriseInstrumentation.EventSinks.TraceEventSink">
            <!-- MNF: disabled parameter to avoid error "The 'sessionName' parameter of value 'TraceSession' is not valid"
                <parameter name="sessionName" value="TraceSession" />
            -->
        </eventSink>

    Related old post: http://bytes.com/topic/net/answers/104761-enterprise-instrumentation-windows-trace-session-manager

    One day I wish to replace all EnterpriseInstrumentation calls with NLog.

    Read the article

  • iOS Support with Windows Azure Mobile Services – now with Push Notifications

    - by ScottGu
    A few weeks ago I posted about a number of improvements to Windows Azure Mobile Services. One of these was the addition of an Objective-C client SDK that allows iOS developers to easily use Mobile Services for data and authentication. Today I'm excited to announce a number of improvements to our iOS SDK and, most significantly, our new support for Push Notifications via APNS (Apple Push Notification Services). This makes it incredibly easy to fire push notifications to your iOS users from Windows Azure Mobile Service scripts.

    Push Notifications via APNS

    We've provided two complete tutorials that take you step-by-step through the provisioning and setup process to enable your Windows Azure Mobile Service application with APNS, including all of the steps required to configure your application for push in the Apple iOS provisioning portal:

      • Getting started with Push Notifications - iOS
      • Push notifications to users by using Mobile Services - iOS

    Once you've configured your application in the Apple iOS provisioning portal and exported your APNS push certificate, it's just a matter of uploading the certificate to Mobile Services using the Windows Azure admin portal. Clicking "upload" within the "Push" tab of your Mobile Service allows you to browse your local file-system and locate/upload your exported certificate. As part of this you can also select whether you want to use the sandbox (dev) or production (prod) Apple service.

    Now, the code to send a push notification to your clients from within a Windows Azure Mobile Service is as easy as the code below:

        push.apns.send(deviceToken, {
            alert: 'Toast: A new Mobile Services task.',
            sound: 'default'
        });

    This will cause Windows Azure Mobile Services to connect to APNS and send a notification to the iOS device you specified via the deviceToken. Check out our reference documentation for full details on how to use the new Windows Azure Mobile Services apns object to send your push notifications.

    Feedback Scripts

    An important part of working with any PNS (Push Notification Service) is handling feedback for expired device tokens and channels. This typically happens when your application is uninstalled from a particular device and can no longer receive your notifications. With Windows Notification Services you get an instant response from the HTTP server. Apple's Notification Service works in a slightly different way and provides an additional endpoint you can connect to in order to poll for a list of expired tokens.

    As with all of the capabilities we integrate with Mobile Services, our goal is to allow developers to focus more on building their app and less on building infrastructure to support their ideas. Therefore we knew we had to provide a simple way for developers to integrate feedback from APNS on a regular basis. This week's update includes a new screen in the portal that allows you to optionally provide a script to process your APNS feedback, and it will be executed by Mobile Services on an ongoing basis.

    This script is invoked periodically while your service is active. To poll the feedback endpoint you can simply call the apns object's getFeedback method from within this script:

        push.apns.getFeedback({
            success: function(results) {
                // results is an array of objects with deviceToken and time properties
            }
        });

    This returns you a list of invalid tokens that can now be removed from your database.

    iOS Client SDK improvements

    Over the last month we've continued to work with a number of iOS advisors to make improvements to our Objective-C SDK. The SDK is being developed under an open source license (Apache 2.0) and is available on github. Many of the improvements are behind the scenes to improve performance and memory usage. However, one of the biggest improvements to our iOS Client API is the addition of an even easier login method. Below is the Objective-C code you can now write to invoke it:

        [client loginWithProvider:@"twitter"
                     onController:self
                         animated:YES
                       completion:^(MSUser *user, NSError *error) {
            // if no error, you are now logged in via twitter
        }];

    This code will automatically present and dismiss our login view controller as a modal dialog on the specified controller. This does all the hard work for you and makes login via Twitter, Google, Facebook and Microsoft Account identities just a single line of code. My colleague Josh just posted a short video demonstrating these new features which I'd recommend checking out.

    Summary

    The above features are all now live in production and are available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Groovy JUnit test support

    - by Martin Janicek
    Good news everyone! I've implemented support for Groovy JUnit tests, which basically means you can finally use Groovy in an area where it is so highly productive! You can create a new Groovy JUnit test via New File/Groovy/Groovy JUnit test, and it should behave the same way as for Java tests. This means that if there is no JUnit setup in your project yet, you can choose between the JUnit 3 and JUnit 4 templates, and the project settings will be changed to match your choice (in the case of Maven-based projects the correct dependencies and plugins are added to the pom.xml, and in the case of Ant-based projects the JUnit dependency is configured). If the project is already configured, the correct template will be used. After that the test skeleton is created, and you can write your own code and of course run the tests together with the Java ones. Some of you were asking for this feature, and of course I don't expect it to be perfect from the beginning, so I would be really glad to see some constructive feedback about what could be improved and/or redesigned ;] ..At the end I have to say that the feature is not active for Ant-based Java EE projects yet (I'm aware of it and it will be fixed for the NetBeans 7.3 final - actually it will be done in a few days/weeks, I just want you to know). But it's already complete in all types of Maven-based projects and also for Ant-based J2SE projects. And as always, a daily build where you can try the feature can be downloaded right here, so don't hesitate to try it!

    Read the article

  • No Thank You – Yours Truly – F#

    - by MarkPearl
    I am plodding along with my F# book. I have reached the part where I know enough about the syntax of the language to understand something if I read it - but not enough about the language to be productive and write something useful. A bit of a frustrating place to be. Needless to say, when you are in this state of mind you end up paging mindlessly through chapters of my F# book with no real incentive to learn anything, until you hit "Exceptions".

    Raising an exception explicitly

    So let's look at raising an exception explicitly. In C# we would throw the exception; F# is a lot more polite - instead of throwing the exception, it raises it...

        (raise (System.InvalidOperationException("no thank you")))

    quite simple...

    Catching an Exception

    So I would expect to be able to catch an exception as well. Let's look at some C# code first...

        try
        {
            Console.WriteLine("Raise Exception");
            throw new InvalidOperationException("no thank you");
        }
        catch
        {
            Console.WriteLine("Catch Exception and Carry on..");
        }
        Console.WriteLine("Carry on...");
        Console.ReadLine();

    The F# equivalent would go as follows...

        open System;

        try
            Console.WriteLine("Raise Exception")
            raise (System.InvalidOperationException("no thank you"))
        with
        | _ -> Console.WriteLine("Catch Exception and Carry on..")

        Console.WriteLine("Carry on...")
        Console.ReadLine();

    In F# there is a "try, with" and a "try, finally".

    Finally...

    In F# there is a finally block; however, the "with" and "finally" blocks can't be combined.

        open System;

        try
            Console.WriteLine("Raise Exception")
            raise (System.InvalidOperationException("no thank you"))
        finally
            Console.WriteLine("Finally carry on...")
            Console.ReadLine()

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down - creating sub-domains centered around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity:

        1. Re-usability
        2. Asynchronous development
        3. Maintainability

    Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons to break down classes rather than just the methods inside of classes? Right now I do believe there is an issue with these mega-packages forming, but the only benefit I can really pin down for breaking them up is readability - something that the experience gained from working with them would solve.

    Read the article

  • How can I get nvidia-96 installed?

    - by Bob
    I'm at my wits' end here. This is my last effort before I go back to Windows. I need to get the nvidia-96 proprietary driver installed. Synaptic won't install it because it says it has dependencies. I installed every single dependency it listed except for "xorg-video-abi-10", which does not show up as an item that can be installed. I have no idea what to do. Using 11.10 with an NVIDIA GeForce 3 GPU. Anyone know how to get this dang driver installed?

    @fossfreedom: the open-source driver is extremely slow. So slow that the OS is unusable - words appear seconds after I type them, and programs take forever to perform actions. Also, it is causing my monitor to turn on and off for no reason.

    @yossile: Synaptic shows that I have xserver-xorg-core installed. And xserver-xorg-core-udeb does not show up as something that can be installed.

    @papseddy: when I try to install the downloaded NVIDIA driver, it says it won't work until I disable the Nouveau kernel driver. I have tried everything to get this dang Nouveau kernel driver disabled. Nothing has been successful.

    Read the article

  • Ubuntu 12.04 and Nvidia GTX 550 Ti

    - by Jim
    OK, I'm currently trying to install Ubuntu x64 Server 12.04 onto the following machine: Intel Core i5-2320 3.0GHz LGA1155 6MB, 8 GB DDR3 RAM, Gigabyte Z68P-DS3 S1155 Intel Z68 DDR3 ATX M/B, OCZ 60 GB SSD, 3x Samsung 2TB drives in a RAID 5 array (via M/B). Now what I think is causing the issue is the following: EVGA GeForce GTX 550 Ti 951MHz 1GB PCI-Express HDMI FPB. As the server CD works in text mode, I haven't had a problem with actually installing Ubuntu. Partitioned with: 1GB /boot SSD, 59GB / SSD, 10GB swap RAID5, ~4TB /home RAID5. On a straight boot, you briefly see the GRUB menu, followed by a blank screen. The keyboard and mouse blink as they are initialised, but there is no sign of life from the screen. This was followed by a bit of research (otherwise known as Google)... I booted with quiet splash nomodeset. Now I have a fully working Linux distro at the command prompt. I then proceeded to try to update the NVIDIA drivers with apt-get (after updating repositories etc.) and rebooting. Still the same problem. I also tried reinstalling from the CD and installing said drivers in the install process before GRUB was installed; still the same symptoms. Does anybody have any solutions? I'm at my wits' end here; I bought this machine to be a Linux server / tinkering machine and have just spent 4-5 hours trying to just get a basic install working.

    Read the article

  • I can't enable extra effects in Ubuntu 10.10. Please help?

    - by jasoncruz98
    I installed Ubuntu 10.10 alongside Ubuntu 11.10 to use an older version of Compiz. On Ubuntu 11.10, Compiz was enabled by default and I didn't need to install any graphics driver to enjoy the effects. All I had to do was install CompizConfig Settings Manager and enable those extra effects. That was Compiz 0.9.6. Now, after installing Ubuntu 10.10, when I first logged in, the graphics were slow. When I dragged a window from one end of the screen to the other, the whole screen would blur up and pixelate, and it would be very laggy. I tried going to System > Preferences > Appearance and selecting Extra effects on the Visual effects tab, but all I got was "Desktop effects could not be enabled". I don't know whether I should install the additional proprietary drivers, because my Internet is slow and it would take a long time. Furthermore, in Ubuntu 11.10, after I installed the proprietary graphics driver, I immediately went into fallback mode and wasn't even offered an option to set my desktop session to Ubuntu 3D. I didn't need the driver to run Compiz in Ubuntu 11.10; it just ran so smoothly. But in Ubuntu 10.10, everything is so laggy. Should I install the ATI/AMD proprietary FGLRX graphics driver for Ubuntu 10.10 to enable extra effects? Or is there something else wrong with my system? Here is the output of lspci -nn | grep VGA:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Sandy Bridge Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:6760]

    Here is the output of the same command in Ubuntu 11.10 (in this case the one which is correct, because I don't have the Sandy Bridge integrated graphics controller):

        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] [1002:6760]

    Read the article

  • Generate a Word document from list data

    - by PeterBrunone
    This came up on a discussion list lately, so I threw together some code to meet the need. In short, a colleague needed to take the results of an InfoPath form survey and give them to the user in Word format. The form data was already in a list item, so it was a simple matter of using the SharePoint API to get the list item, formatting the data appropriately, and using response headers to make the client machine treat the response as MS Word content. The following rudimentary code can be run in an ASPX (or an assembly) in the 12 hive. When you link to the page, send the list name and item ID in the querystring and use them to grab the appropriate data.

        // Clear the current response headers and set them up to look like a Word doc.
        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.Charset = "";
        HttpContext.Current.Response.ContentType = "application/msword";
        string strFileName = "ThatWordFileYouWanted" + ".doc";
        HttpContext.Current.Response.AddHeader("Content-Disposition", "inline;filename=" + strFileName);

        // Using the current site, get the List by name and then the Item by ID (from the URL).
        string myListName = HttpContext.Current.Request.QueryString["listName"];
        int myID = Convert.ToInt32(HttpContext.Current.Request.QueryString["itemID"]);
        SPSite oSite = SPContext.Current.Site;
        SPWeb oWeb = oSite.OpenWeb();
        SPList oList = oWeb.Lists[myListName];
        SPListItem oListItem = oList.Items.GetItemById(myID);

        // Build a string with the data -- format it with HTML if you like.
        StringBuilder strHTMLContent = new StringBuilder();
        // *
        // Here's where you pull individual fields out of the list item.
        // *
        // Once everything is ready, spit it out to the client machine.
        HttpContext.Current.Response.Write(strHTMLContent);
        HttpContext.Current.Response.Flush();
        HttpContext.Current.Response.End();

    Read the article

  • Still Alive…

    - by MOSSLover
    As GLaDOS would say at the end of Portal, "I'm still alive…". I am around, but I'm just not posting as frequently as I should. I am trying to get acclimated to my new job, planning SharePoint Saturday New York City and Women in SharePoint, plus trying to lead a normal life, doing normal chores and hanging out with my boyfriend. What does this mean? Well, I'm trying to cut back to one or two events a month, which will include Heartland Developer Conference, Best Practices Conference, SPS Ozarks, SPS NYC (not speaking, running), and maybe SPS Denver and/or SPS East Bay. So with the new-job acclimation the blog suffers, and Twitter is getting less love. I'm only posting on Twitter at night. I will try to blog when I can as I see more 2010 and 2007 things that I find interesting to share. I guess when you are a new employee you try to figure out what's going on for the first few months. It's really hard to post on SharePoint issues while that happens. I'm really sorry, guys, and I will try harder to post at least a couple of times a month (and maybe moderate comments slightly better). I hope that you all have a good weekend.

    Read the article

  • A Big Congrats to Hitachi - Now a Diamond Level Partner!

    - by Kristin Rose
    We've said this before and we'll say it again: nothing sparkles quite like a diamond. Being named a Diamond level partner is the highest ranking available in the Oracle PartnerNetwork Specialized program, and we could not be more excited to congratulate Hitachi, Ltd. on their new glistening title! Hitachi has helped clients maximize the business benefits of their Oracle Applications for more than two decades, while also helping them bring their business visions to life through industry-led services and solutions. Not only has Hitachi achieved Diamond level status, they also have 29 additional Oracle Specializations! Here at OPN we know that it all starts with equipping our partners with the proper tools to help their end customers succeed, and we truly believe that diamond partners are forever! So here's to you, Hitachi, our diamond in the rough! Shine On, The OPN Communications Team

    Read the article

  • Cube chunk via list ToArray()

    - by Christian Frantz
    I've created a list of vertices that I fill for each cube made in my array "cubes". When each cube is created, SetUpVertices is called, a method that stores the 8 vertices of my cube. At the end of the list creation, I create a vertex buffer and set the data of the list that contains the vertices of all 25 cubes to that vertex buffer, effectively creating a "chunk" of cubes. The problem is that an InvalidOperationException ("The array is not the correct size for the amount of data requested.") is thrown at the line containing vertices.ToArray(). I don't have an array for this, as the number of cubes will be changing and arrays aren't dynamic. What could be the cause of this?

        for (int x = 0; x < 5; x++)
        {
            for (int z = 0; z < 5; z++)
            {
                SetUpVertices();
                cubes.Add(new Cube(device, new Vector3(x, map[x, z], z), color));
            }
        }

        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionColor), 8, BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionColor>(vertices.ToArray());
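
    A hedged guess at the cause, based only on the snippet above: the buffer is declared to hold 8 vertices, but 25 cubes times 8 vertices is 200 elements, so SetData receives far more data than the buffer was sized for. Sizing the buffer from the list would look something like this (assuming vertices is the List<VertexPositionColor> the question describes):

        // Sketch: size the vertex buffer from the actual vertex count instead of 8.
        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionColor),
                                        vertices.Count, BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionColor>(vertices.ToArray());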

    Read the article

  • You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one

    - by B R Clouse
    Before we examine Consolidation, the next step in the journey to cloud, let's take a short detour to address a critical choice you will face at the outset of your journey: whether or not to deploy your databases in virtual machines. A common misconception we've encountered is the belief that moving to cloud computing can be accomplished by simply hosting one's current operating environment as-is within virtual machines, and then stacking those VMs together in a consolidated environment. This solution is often described as "Infrastructure as a Service" (IaaS) because the building block for deployments is a VM, which behaves like a full complement of infrastructure. This approach is easy to understand and may feel like a good first step, but it won't take your databases very far in the journey to cloud computing. In fact, if you follow the IaaS fork in the road, your journey will end quickly, without realizing the full benefits of cloud computing. The better option is to rationalize the deployment stack so that VMs are needed only for exceptional cases. By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now the building block will be database instances, or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as "Platform as a Service" (PaaS). PaaS opens the door to higher degrees of consolidation than IaaS, because with PaaS you will not need to accommodate the footprint (operating system, hypervisor, processes, ...) that each VM brings with it. You will also reduce your maintenance overhead if you move forward without the VMs and their OSes to patch and monitor. So while IaaS simply shuffles complex and varied environments into VMs, PaaS actually reduces complexity by rationalizing to the smallest possible set of components. Now we're ready to look at the consolidation options that PaaS provides -- in our next blog posting.

    Read the article

  • Organization standards for large programs

    - by Chronicide
    I'm the only software developer at the company where I work. I was hired straight out of college, and I've been working here for several years. When I started, everyone was managing their own data as they saw fit (lots of filing cabinets). Until recently, I've only been tasked with small standalone projects to help with simple workflows. At the beginning of the year I was asked to build a replacement for their HR software. I used SQL Server, Entity Framework and WPF, along with the MVVM and Repository/Unit of Work patterns. It was a huge hit. I was very happy with how it went, and it was a very solid program. As such, my employer asked me to expand this program into a corporate dashboard that tracks all of their various corporate data domains (people, salary, vehicles/assets, statistics, etc.). I use integrated authentication, and thanks to the initial HR build, I can map users to people in positions, so I know who is who when they open the program, and I can show each person a customized dashboard based on their work functions. My concern is that I've never worked on such a large project. I'm planning, meeting with end users, developing, documenting, testing and deploying it on my own. I'm partway through the second addition, and I'm seeing that my code is getting disorganized. It's still programmed well; I'm just struggling with the organization of namespaces, classes and the database model. Are there any good guidelines to follow that will help me keep everything straight? As I have it now, I have folders for Data, Repositories/Unit of Work, Views, View Models, XAML Resources and miscellaneous utilities. Should I make parent folders for each data domain? Should I make separate EF models per domain instead of the one I have for the entire database? Are there any standards out there for organizing large programs that span multiple data domains? I would appreciate any suggestions.
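
    One commonly suggested guideline, offered here as a sketch rather than a standard: organize by domain first and by layer second, so everything about one domain lives together, and the EF model can be split along the same seams. A minimal C# illustration, with all names hypothetical:

        // Hypothetical domain-first layout (folder per domain, layers inside):
        //
        //   Dashboard.Hr/        Dashboard.Assets/     Dashboard.Statistics/
        //     Models/              Models/               Models/
        //     Repositories/        Repositories/         Repositories/
        //     ViewModels/          ViewModels/           ViewModels/
        //     Views/               Views/                Views/

        namespace Dashboard.Hr.Models
        {
            public class Person
            {
                public int PersonId { get; set; }
                public string Name { get; set; }
            }
        }

        namespace Dashboard.Hr.Repositories
        {
            using Dashboard.Hr.Models;

            // Each domain owns its repositories; anything cross-domain depends
            // only on narrow interfaces like this one.
            public interface IPersonRepository
            {
                Person GetById(int id);
            }
        }

    Separate, smaller EF models per domain tend to fall out naturally from this arrangement, with each shared entity owned by exactly one domain.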

    Read the article
