Search Results

Search found 29638 results on 1186 pages for 'phone number'.

Page 741/1186 | < Previous Page | 737 738 739 740 741 742 743 744 745 746 747 748  | Next Page >

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile:
    Why Oracle: “Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support,” said Sanjib Roy, CTO, Globacom.
    Implementation process: It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance and perform the software installation and configuration, certification, and resiliency testing. The entire process, from site planning to phase-I go-live, was executed in just over ten weeks, well ahead of the four months allocated to the project. mCentric leveraged Oracle Advanced Customer Support Services’ implementation methodology to ensure configurations were tailored for peak performance, all patches were applied, and software and communications were consistently tested using proven methodologies and best practices. Read the entire profile here.

    Read the article

  • Belarc Advisor (Store Passwords using Reversible Encryption)

    - by Steve
    Hi, I'm using Belarc Advisor to examine my PC. Part of BA is a security benchmark summary, which examines components of Windows security and provides a benchmark rating. Two items are marked as Fail:
    - Store Passwords using Reversible Encryption
    - Password History Size
    I have opened the Local Security Settings tool from the Control Panel's Administrative Tools, and ensured that the "Store passwords using reversible encryption" setting is enabled. Also, I've set the password history to a number. So I'm a bit miffed about the Fail marks. Any idea why they appear? Any clues as to how I can get them to Pass? Thanks, Steve.

    Read the article

  • ArchBeat Link-o-Rama for December 12, 2012

    - by Bob Rhubart
    “Cloud Integration in Minutes” – True or False? | Bruce Tierney
    The answer is “True, but...” according to Bruce Tierney. “Connecting on-premise and cloud applications ‘in minutes’ is true… provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points.” Get the rest of the story in Bruce's detailed post.
    Tech World Discovers New Species: The Cloud Architect | Wired Enterprise | Wired.com
    This Wired article by Cade Metz boils down to one essential conclusion: Cloud computing is a significant departure from "data center designs of the past," and the demand for the specialized skills of the cloud architect will only increase. But you already knew that, right?
    Oracle B2B - Synchronous Request Reply | A-Team - SOA
    "Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over http using the b2b/syncreceiver servlet," says C. D. Wright of the Fusion Middleware A-Team. His post includes a demo and everything you need to run it.
    Thought for the Day
    "Don't worry about what anybody else is going to do… The best way to predict the future is to invent it." — Alan Kay (Month Day, Year - Month Day, Year) Source: SoftwareQuotes.com

    Read the article

  • Cannot install "ATI/AMD proprietary FGLRX graphics driver" (SystemError)

    - by Fisherman John
    I have just installed a fresh copy of 12.04.1 64bit. I formatted my PC completely and enabled updates during the installation. After the installation was complete, I went on updating my software. However, when I wanted to install the additional drivers using the Additional Drivers tool (namely "ATI/AMD proprietary FGLRX graphics driver"), it gave me this error:
    SystemError: E:Unable to correct problems, you have held broken packages.
    The same error shows up if I try installing the post-release updates driver. Installing the drivers from the terminal results in this output:
    XXXXXX:~$ sudo apt-get install fglrx
    [sudo] password for XXXXX:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
    fglrx : Depends: lib32gcc1 but it is not going to be installed
            Depends: libc6-i386 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    I get this using "sudo apt-get update": http://pastebin.com/AWAtDXjY
    But "sudo apt-get install fglrx" still gets me this error: http://pastebin.com/RYM55bVN
    And "sudo apt-get -f install fglrx" gives me this error: http://pastebin.com/xxekajvP
    Any help would be greatly appreciated. (Please note that I'm new to Linux, coming directly from Windows. I have tried Ubuntu twice or so before, but it was not for a long period of time. The drivers got installed smoothly the few times I've tried Ubuntu, but post-release updates never worked for me.)
    [I am going to work now, so I can only answer from my phone. Can't really test any new solutions you may give me until ~10 hours from now. Maybe more.]
    @stonedsquirrel: When I try to run that command, I get exactly the same "unmet dependencies / held broken packages" error quoted above. (I am Fisherman John, dunno how to login & thereby respond to your comment again.)

    Read the article

  • What is the standard technique for shifting the frames of a sprite according to user input?

    - by virtual__
    From my own experience, I developed two techniques for changing the sprites of a character that's reacting to user input, in the context of a classic 2D platformer. The first one is to store all of the character's pixmaps in a list, keeping the index of the currently used pixmap in an ordinary variable. This way, every time the player presses a key, say the right arrow for moving the character forward, the graphics engine sees what the next pixmap to draw is, draws it, and increments the index counter. That's a pretty common approach I believe; the problem is that in this case the animation's quality depends not only on the number of sprites available but also on how often your engine listens to user input. The second technique is to actually play an animation on every key press event. For this you can use any sort of animation framework you want. It's only necessary to set the timer and the animation steps, and to call the animation's play() method in your key press event handler. The problem with that approach is that it lacks responsiveness, since the character won't react to any input while the current animation is still being played. What I want to know is whether you are using one of these techniques (or something similar) in your games, or whether there's a standard method for animating sprites out there that's widely known by everybody but me.
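
    A minimal sketch (all names hypothetical, not from the question) of the first technique with its stated drawback addressed: the frame index advances on elapsed time rather than on how often the engine happens to poll input, so animation quality depends only on the frame duration and the number of frames.

    #include <cstddef>
    #include <vector>

    // Placeholder for whatever your engine uses as a drawable frame.
    struct Sprite { int pixmapId = 0; };

    class WalkAnimation {
    public:
        WalkAnimation(std::vector<Sprite> frames, float secondsPerFrame)
            : frames_(frames), secondsPerFrame_(secondsPerFrame) {}

        // Call once per game tick while the movement key is held; dt is elapsed seconds.
        void advance(float dt) {
            elapsed_ += dt;
            while (elapsed_ >= secondsPerFrame_) {          // advance by whole frames only
                elapsed_ -= secondsPerFrame_;
                index_ = (index_ + 1) % frames_.size();     // assumes at least one frame
            }
        }

        const Sprite& current() const { return frames_[index_]; }

    private:
        std::vector<Sprite> frames_;
        float secondsPerFrame_;   // animation speed, independent of the input polling rate
        float elapsed_ = 0.0f;
        std::size_t index_ = 0;
    };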

    Read the article

  • Trying to make a game with C++, using lists to store bullets and enemies, but they are not erased

    - by XD_dued
    I've been trying to make a pretty simple space shooter game with C++, and I have been having a lot of trouble trying to use lists to store enemies and bullets. Basically I followed the post here almost exactly to store my bullets: SDL Bullet Movement. I've done something similar to store my enemies. However, when I call bullets.erase(it++), for some reason the bullet is not erased. When the bullet movement is run for the next frame, it tries to re-delete the bullet and segfaults the program. When I comment out the erase line, it runs fine, but the bullets are then never erased from the list... Is there any reason why the elements of the list aren't being deleted? I also set it up to print the number of elements in the list for every iteration, and it does not go down after deleting. Thanks!
    EDIT: Here's the specific code I'm using to store my enemies and having them act:
    std::list<Grunt*> doGrunts(std::list<Grunt*> grunts)
    {
        for(std::list<Grunt*>::iterator it = grunts.begin(); it != grunts.end();)
        {
            if((*it)->getHull() == 0)
            {
                delete *it;
                grunts.erase(it++);
            }
            else
            {
                (**it).doUnit(grunts, it);
                ++it;
            }
        }
    }
    Grunt is my enemy class, and the list grunts is a global variable. Is that my problem? When I pass the global into the function, does it become local? I assumed lists would be a reference type, so I thought this wouldn't be a problem. Sorry if this was a dumb question, I'm very new to C++; this is the first major thing I'm working on.
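
    A minimal corrected sketch, not the asker's final code: the likely root cause, as the question itself suspects, is that std::list is a value type in C++, so passing it by value copies the container and erasures never reach the caller's global list. Taking the list by reference and using the iterator returned by erase keeps the loop valid. Grunt's members are stubbed here (hypothetically) only so the snippet compiles.

    #include <list>

    // Minimal stand-in for the Grunt class from the question (hypothetical members).
    struct Grunt {
        int hull = 0;
        int getHull() const { return hull; }
        void doUnit(std::list<Grunt*>&, std::list<Grunt*>::iterator) { /* movement/AI */ }
    };

    // Pass by reference so erasing affects the caller's (global) list.
    void doGrunts(std::list<Grunt*>& grunts) {
        for (std::list<Grunt*>::iterator it = grunts.begin(); it != grunts.end(); ) {
            if ((*it)->getHull() == 0) {
                delete *it;             // free the enemy object first
                it = grunts.erase(it);  // erase returns the next valid iterator
            } else {
                (*it)->doUnit(grunts, it);
                ++it;
            }
        }
    }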

    Read the article

  • Engineered Systems and PCI

    - by Joel Weise
    Oracle has a number of different engineered systems. These are designed to be highly integrated, optimized and secure systems. The Exadata database engineered system and the Exalogic application engineered system are two good examples. Often I am asked how these comply with different standards and regulations. Exalogic is the Oracle engineered system that supports applications and the focus of today's blog. First, we must recognize that as a collection of hardware and software, we cannot simply state that Exalogic is "compliant" with PCI DSS. This is because Exalogic must be implemented within the context of one's existing IT infrastructure, the security features of that infrastructure, the governance framework that exists, security policies, operational procedures, and other factors. What we can say, though, is that Exalogic has been designed with various security capabilities that can be utilized to support compliance with PCI DSS as well as other standards and regulations (e.g., NIST and HIPAA). Given that, Exalogic can be an excellent platform for running PCI-related payment applications. Coalfire Systems, a leading QSA in the US, has evaluated Exalogic against PCI DSS and supports this position. Their evaluation can be found here: Exalogic and PCI Compliance. I hope you find it useful.

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
    As you know, the Java EE 7 EG recently posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. Over 1,100 developers took time out of their busy lives to let their voices be heard! The results were just posted to the Java EE EG. The exact summary sent to the EG is available here. We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very appreciated, encouraging and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. In addition, many thanks to independent Java EE 7 expert group member Markus Eisele for blogging the results. You can read more details about the results here.

    Read the article

  • Are there HP Infinihost III ex drivers for windows available?

    - by Matt
    I picked up some InfiniBand cards off eBay for development/testing purposes. I was hoping that there would be some drivers available under Windows 7, but they don't seem to be recognised by the OFED software, which would seem to contain Windows drivers. They are, however, immediately picked up by Ubuntu and drivers are loaded. Are these cards supported under Windows 7 at all? mstflint under Linux reports:
    Image type:      Failsafe
    FW Version:      4.8.930
    I.S. Version:    1
    Device ID:       25208
    Chip Revision:   A0
    Description:     Node             Port1            Port2            Sys image
    GUIDs:           001a4bffff0c9374 001a4bffff0c9375 001a4bffff0c9376 001a4bffff0c9377
    Board ID:        (HP_0060000001)
    VSD:
    PSID:            HP_0060000001
    From what I can tell they are the following: HP IB 4X DDR PCI-e DUAL PORT HCA (HP part number 409376-B21)

    Read the article

  • All-on-one-page print view in Plone

    - by Kev
    We have a Plone 4 document that is in a hierarchy. At each node there's either a document, or a folder. Folders then have more documents and folders. We want to be able to print the entire hierarchy, which means rendering the whole thing on one page. I see a number of web sites like this one that seem to have something like this. Is it manually done or is there some add-on I can get to make this feature possible?

    Read the article

  • How can I malloc an array of struct to do the following using file operation?

    - by RHN
    How can I malloc an array of structs to do the following using file operations? The file is a .txt; the input in the file is like: 10 22 3.3 33 4.4. I want to read the first line from the file, and then I want to malloc an array of input structures equal to the number of lines to be read in from the file. Then I want to read the data from the file into the malloc'd array of structures. Later on I want to store the size of the array into the variable size and return the array. After this I want to create another function that prints out the data in the input variable in the same form as the input file, and suppose a function called clean_data will free the malloc'd memory at the end. I have tried something like:
    struct input {
        int a;
        float b, c;
    };

    struct input* readData(char *filename, int *size);

    int main()
    {
        return 0;
    }

    struct input* readData(char *filename, int *size)
    {
        char filename[] = "input.txt";
        FILE *fp = fopen(filename, "r");
        int num;
        while(!feof(fp))
        {
            fscanf(fp, "%f", &num);
            struct input *arr = (struct input*)malloc(sizeof(struct input));
        }
    }
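
    For reference, a minimal C-style sketch of one way to do it, reusing the question's readData and clean_data names. It assumes, as the question describes, that the first line gives the number of records and that each following record is one "int float float" group matching struct input; the exact file layout is not fully spelled out, so treat that as an assumption.

    #include <stdio.h>
    #include <stdlib.h>

    struct input {
        int a;
        float b, c;
    };

    /* Reads the record count from the first line, then that many "int float float" records. */
    struct input* readData(const char *filename, int *size)
    {
        FILE *fp = fopen(filename, "r");
        if (!fp) return NULL;

        int n = 0;
        if (fscanf(fp, "%d", &n) != 1 || n <= 0) { fclose(fp); return NULL; }

        struct input *arr = (struct input*)malloc(n * sizeof(struct input));
        if (!arr) { fclose(fp); return NULL; }

        int i;
        for (i = 0; i < n; ++i) {
            if (fscanf(fp, "%d %f %f", &arr[i].a, &arr[i].b, &arr[i].c) != 3)
                break;              /* stop at the last complete record */
        }
        fclose(fp);
        *size = i;                  /* number of records actually read */
        return arr;
    }

    void printData(const struct input *arr, int size)
    {
        printf("%d\n", size);
        for (int i = 0; i < size; ++i)
            printf("%d %g %g\n", arr[i].a, arr[i].b, arr[i].c);
    }

    void clean_data(struct input *arr)
    {
        free(arr);                  /* release the malloc'd block */
    }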

    Read the article

  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed... it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I'm actually going to develop a C#-based client that will enable an existing 'legacy' application to use SharePoint as a document management system (DMS) besides other already existing solutions.
    Finding excuses: Well, no. Not really. I simply didn't block any or enough time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes during this month, which means that I spent an average of more than 1 hour per day on getting into SharePoint. I know that might sound a little bit low, but also keep in mind that I went for the challenge on top of my daily job and private responsibilities. During the same period there had been two priority 0 incidents from clients (external root cause) which took precedence over this leisure project.
    More to come: Anyway, it was a first trial, and despite the low level of reporting on my blog, I'm confident about what I learned during the last 30 days, and I'm ready to implement the client's requirements. At least, I would say that I have a better understanding about the road map, or the path to walk during the next month. As time and secrecy allow, I'm going to note down some bits and pieces... During the process of development, I'm going to 'cheat' on the challenge summary article and add links to those new entries, just for the sake of completeness.
    Next challenge? Hmm, there had been ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) regarding certifications in IT, and eventually we might organise some kind of a study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things:
    - Very large installation size (~4 GB)
    - Composed of a core and several plugins (toolboxes)
    Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The control file is just the standard all: dh $@. There is no Makefile; only copying files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it re-copies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are:
    - Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's gonna be a big Makefile.)
    - Can you make debuild only rebuild .debs that need rebuilding? Or specify which .debs to rebuild?
    - Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)

    Read the article

  • Smarter Search Results in NetBeans IDE 7.2

    - by Geertjan
    After you search your code using NetBeans IDE (using Ctrl-F for "Find" or Ctrl-H for "Replace"), you see the Search Results window, which looks like this: At least, the above is how it looks in NetBeans IDE 7.2. Before that, you didn't have all those extra columns (which can be displayed in the Search Results window after clicking the small button top right in the view) and you also didn't have the quick search (which is invoked by typing directly into the Search Results window), as can be seen here: So, the Search Results window now provides a lot more info than before. Being able to know the path to a file I've found, as well as the last modification date, file size, and the number of matches within the file, is useful at the end of a search process. In the NetBeans IDE 7.2 New & Noteworthy, the above changes are described in the Utilities section, as well as in the Quick Search in OutlineView section, where you can read that these are generic solutions that can be used in your own OutlineViews. Other OutlineViews in NetBeans IDE 7.2, such as the Debugger window, now also have these new features. A related article worth reading is Beefed Up Code Navigation Tools in NetBeans IDE 7.2. 

    Read the article

  • TechEd 2012: MVVM In XAML

    - by Tim Murphy
    Paul Sheriff was a real character at the start of his MVVM in XAML session. There was a lot of sarcasm and self-deprecation going on prior to the start. That is never a bad way to get things rolling right after lunch. Then things got semi-serious. The presentation itself had a number of surprises, but not all of them had to do with XAML. When he flipped over to his company's code generation tool it took me off guard. I am used to generators that create code for a whole project, but his tools were able to create different types of constructs on demand. It also made it easier to follow what he was doing than some of the other demos I have seen this week where people were using code snippets. Getting to the heart of the topic, I found myself thinking that I may have found my utopia for application development in MVVM. Yes, I know there is no such thing, but this comes closer than any other pattern I have learned about. This pattern allows the application to have better separation of concerns than I have seen before. This is especially true since you can leverage data binding. I'm not sure why it has taken me so long to find time for this subject. As Paul demonstrated, using this pattern with XAML gives you multi-platform reusable code when you leverage common utility classes and ViewModel classes. The one drawback I see is that you have to go to the lowest common denominator between the platforms you want to support, but you always have to weigh the trade-offs. And finally, the Visual Studio nuggets just keep coming. Even though it has been available for several generations of Visual Studio, I had never seen someone use linked files within a solution. It just goes to show that I should spend more time exploring the deeper features of each dialog. del.icio.us Tags: TechEd, TechEd 2012, MVVM, Paul Sheriff, Patterns, Visual Studio 2012

    Read the article

  • Fix Fatal Error Condition showing system path

    - by JMC
    I've noticed there are a large number of servers running Magento Commerce that will return a fatal error showing the system path:
    Fatal error: Uncaught exception 'Exception' with message 'File '/usr/local/www/magento/data1702/media/css' does not exists.' in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php:96
    Stack trace:
    #0 /usr/local/www/magento/data1702/get.php(205): Varien_File_Transfer_Adapter_Http->send('/usr/local/www/...')
    #1 /usr/local/www/magento/data1702/get.php(165): sendFile('/usr/local/www/...')
    #2 {main}
    thrown in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php on line 96
    Magento as an application is generally good about suppressing error messages. How can a Linux server running Apache be configured to avoid returning this error message, since the app has problems suppressing it?

    Read the article

  • Dealing with numerous, simultaneous sounds in unity

    - by luxchar
    I've written a custom class that creates a fixed number of audio sources. When a new sound is played, it goes through the class, which creates a queue of sounds that will be played during that frame. The sounds that are closer to the camera are given preference. If new sounds arrive in the next frame, I have a complex set of rules that determines how to replace the old ones. Ideally, "big" or "important" sounds should not be replaced by small ones. Sound replacement is necessary since the game can be fast-paced at times, and should try to play new sounds by replacing old ones. Otherwise, there can be "silent" moments when an old sound is about to stop playing and isn't replaced right away by a new sound. The drawback of replacing old sounds right away is that there is a harsh transition from the old sound clip to the new one. But I wonder if I could just remove that management logic altogether, and create audio sources on the fly for new sounds. I could give "important" sounds more priority (closer to 0 in the corresponding property) as opposed to less important ones, and let Unity take care of culling out sound effects that exceed the channel limit. The only drawback is that it requires many heap allocations. I wonder what strategy people use here?
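
    For illustration, a minimal engine-agnostic sketch of the fixed-pool replacement policy described above. Unity scripts would normally be C#, so treat this C++ as a sketch of the decision logic only; all names are hypothetical.

    #include <vector>

    struct Channel {
        bool  playing  = false;
        float priority = 0.0f;   // higher = more important (bigger sound / closer to the camera)
    };

    // Pick a channel for a new sound: a free slot if any, otherwise evict the weakest
    // playing sound, but only if the newcomer outranks it. Returns -1 to drop the sound.
    int acquireChannel(std::vector<Channel>& pool, float newPriority) {
        int weakest = -1;
        for (int i = 0; i < (int)pool.size(); ++i) {
            if (!pool[i].playing) return i;                       // free slot wins outright
            if (weakest < 0 || pool[i].priority < pool[weakest].priority)
                weakest = i;                                      // remember the least important sound
        }
        return (weakest >= 0 && pool[weakest].priority < newPriority) ? weakest : -1;
    }

    The alternative raised at the end of the question, creating audio sources on the fly and leaning on the priority property plus the engine's channel limit, removes this bookkeeping at the cost of the per-sound allocations mentioned.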

    Read the article

  • Ubuntu 12.04 still slow at mounting internal filesystem

    - by Matthew Goson
    I'm using a Toshiba laptop with this configuration:
    - CPU: Core i5, 2.4GHz
    - RAM: 4GB
    - Graphics card: Intel
    - Hard disk: 500GB SATA
    I installed Ubuntu 12.04 64bit and got the same issue as this guy: Very slow boot due to mounting filesystem. I tried all the suggestions there, but the slow boot issue is still here. Here's a part of my dmesg:
    [ 2.041015] usbhid: USB HID core driver
    [ 2.101378] usb 1-1.6: new full-speed USB device number 5 using ehci_hcd
    [ 2.137980] atl1c 0000:04:00.0: version 1.0.1.0-NAPI
    [ 2.779080] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null)
    [ 22.822597] udevd[381]: starting version 175
    [ 22.837954] ADDRCONF(NETDEV_UP): eth0: link is not ready
    [ 22.850837] lp: driver loaded but no devices found
    [ 23.003822] Adding 7079096k swap on /dev/sda7. Priority:-1 extents:1 across:7079096k
    [ 23.407915] mei: module is from the staging directory, the quality is unknown, you have been warned.
    [ 23.408153] mei 0000:00:16.0: PCI->APIC IRQ transform: INT A -> IRQ 16
    [ 23.408160] mei 0000:00:16.0: setting latency timer to 64
    [ 23.408211] mei 0000:00:16.0: irq 44 for MSI/MSI-X
    [ 23.433196] [drm] Initialized drm 1.1.0 20060810
    Additional information: my sda1 is a primary NTFS partition, and sda2 is a primary ext4 partition which I installed Ubuntu onto. Other partitions are inside an extended partition.

    Read the article

  • Looking for temporal upsampling software

    - by timday
    I'm looking for something (preferably FOSS software) which can take an animation with N images as input, and which will output an animation with M frames, where M is in the range 2N to 5N or so. I believe the general technique is called "temporal upsampling" or possibly "inbetweening" (or "'tweening" for short). Note that it does need to make some effort to do motion tracking of things in the scene ("optical flow"); just fading ("dissolve") between keyframes isn't going to cut it. Googling "temporal upsampling" turns up any number of papers on the subject, but I've yet to discover any code/software I can just use to try the technique out. Any suggestions?

    Read the article

  • SOA: Simplifying Cloud, Mobile, and On-premise Integration–Webcast October 24th 2013

    - by JuergenKress
    The proliferation of mobile devices, data explosion, and cloud enablement have caused a dramatic shift in IT. Organizations need to rethink their application infrastructures to accommodate increased processing speeds and heightened security and availability concerns for their applications, all while lowering total cost of ownership. Traditional infrastructures may not be sufficient to accommodate the diversity and complexity of integrations in this new era. Many of today's IT organizations rely on a Service Oriented Architecture (SOA) backbone to keep their businesses running. SOA adoption and acceptance across industries have led to platform maturity at the application layer level. However, we are at the start of an era where there is a new modus operandi for organizations to thrive and deliver continuously on competitive differentiation. This change is a result of market globalization, the explosion in the number of mobile devices, unparalleled growth in voluminous data, and innovation that crosses organizational boundaries. Social, mobile, and cloud are terms that are revolutionizing the way organizations operate. Oracle SOA Suite is a hot-pluggable software suite to build, deploy and manage Service-Oriented Architectures (SOA). Oracle SOA transforms complex application integration into agile and reusable service-based connectivity by mediating, routing, and managing interactions between services and applications in the enterprise and in the cloud. Oracle SOA Suite's hot-pluggable architecture helps businesses lower upfront costs by allowing maximum re-use of existing IT investments and assets. Join us on this webcast to find out how you can optimize the use of Oracle SOA Suite, simplify integration, and learn what the next generation of SOA has to offer you.
    Agenda:
    - What's new in Oracle SOA
    - Simplifying integration
    - Application Integration and SOA
    - Cloud integration with SOA
    - Mobile Integration leveraging Oracle SOA Suite
    - Oracle Delivers on Next Generation SOA
    - Customer Examples
    - Summary and Q&A
    Webcast: Thursday October 24th, 2013, 10am CET (8am UTC / 11am EEST). Details at the Registration Page.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum
    Technorati Tags: cloud integration, mobile integration, training, webcast middleware, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Life Is Full Of Changes (Part 1)

    - by Brian Jackett
    Today will be my last day with Sogeti.  I’ve been with Sogeti USA for just over 4 years.  In that time I’ve gotten to work on some great projects, develop relationships with some brilliant and passionate people, participate in the .Net developer and SharePoint communities, and grow my skills in a number of areas I’m passionate about.     As with all good things they must come to an end though.  I’ve accepted a position with another company and will provide more details once the transition has completed.  This decision was a difficult one to make but it provides a great career opportunity on many levels.  As much as my new schedule allows I plan to continue participating in local user groups, speaking at conferences, and blogging.     Speaking of which, you may have noticed my reduced blogging activity in the past few months.  In addition to a career change I’m also in the process of moving to a new residence (only a few miles from my current residence, so I’ll still be in Columbus.)  Searching for a new place, filling out paperwork, and all of the other work associated with this move has taken away a good chunk of the time I used to devote to blogging.  Once everything gets settled out with the move and job change I’ll re-evaluate how much time I can devote to blogging.     A big thanks to Sogeti and everyone who has been so supportive over my time with them.  It’s hard to move on, but I am excited for the prospects that the future will bring.         -Frog Out

    Read the article

  • About C# objects and the possibilties it has

    - by user527825
    As a novice programmer I always wonder about C#'s capabilities. I know it is still early for me to judge, but all I want to know is whether C# can do complex things, or anything outside the Windows OS. I think C# is a proprietary language (I don't know if I said that right), meaning you can't use it outside Visual Studio or Windows. Also, it seems you can't create your own controls (is "object" the right word?); you are forced to use the ones available in the toolbox and their properties and methods. Can C# be used with the OpenGL or DirectX APIs? Finally, it always bothers me when I start doing things in Visual Studio. I know it sounds arrogant to say, but sometimes I feel that I don't like being forced to use something even if it's helpful; I feel (do I have the right to feel this?) that I want to do everything by myself. Don't laugh, I just feel that this will give me a better understanding. Is Visual C# like using MaxScript inside 3ds Max, in that C# is exclusively for Windows, Forms, and Windows-related components, just as MaxScript is only for 3D editing and manipulation of various things in that software? If it is too difficult for a beginner, I hope you don't answer the fourth question, as I don't have enough motivation and I want to keep the little I have. Note: Sorry for my English, I am self-taught and have never used the language with native speakers, so expect some errors. I have a lot of questions regarding many things; what do you think is a reasonable number of questions to ask per day that would not bother the admins of the site and the members here? Thank you for your time.

    Read the article

  • Combine several locations with regex in nginx

    - by AlexAtNet
    I have a dynamic number of Joomla installations in subfolders of the domain. For example:
    http://site/joomla_1/
    http://site/joomla_2/
    http://site/joomla_3/
    ...
    Currently I have the following config that works:
    index index.php;
    location / {
        index index.php index.html index.htm;
    }
    location /joomla_1/ {
        try_files $uri $uri/ /joomla_1/index.php?q=$uri&$args;
    }
    location /joomla_2/ {
        try_files $uri $uri/ /joomla_2/index.php?q=$uri&$args;
    }
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm/joomla.sock;
        ...
    }
    I'm trying to combine the joomla_N rules into one:
    location ~ ^/(joomla_[^/]+)/ {
        try_files $uri $uri/ /$1/index.php?q=$uri&$args;
    }
    but the server starts to return index.php as-is (it does not call php-fpm). It looks like nginx stops processing the regex rules after the first match. Is there any way to combine these rules with something like a regex?

    Read the article

  • How to optimize a box2d simulation in action game?

    - by nathan
    I'm working on an action game and I use Box2D for physics. The game uses a tiled map. I have different types of body: static ones used for tiles, and dynamic ones for the player and enemies. Actually I tested my game with ~150 bodies and I get a constant 60 FPS on my computer, but not on my mobile (Android). The FPS drops as the number of bodies increases. After having profiled the Android application, I saw that World.step took around 8ms of CPU time to execute. Here are a few things to note:
    - Not all the world is visible on screen; I use a scrolling system.
    - Enemies are constantly moving toward the player, so there is always a force applied to their bodies.
    - Enemies need to collide with each other.
    - Enemies collide with tiles.
    I also know that I can activate/deactivate or sleep/wake bodies. Considering the fact that only a part of the enemies are possibly displayed on screen, are there any optimizations I can do to reduce the execution time of the Box2D simulation? I found a guy trying an optimization based on the distance of enemies from the player (link). But it seems like he just deactivates far bodies (in my case, I could deactivate bodies that are not visible). However, my enemies need to move even when they are not visible on screen, and applying forces will not work on inactive bodies. Should I play with sleeping bodies here? Also, enemies are composed of two fixtures and are constantly colliding with each other and with tiles, but I really never need to get notified about that. Is there anything I can do to optimize this kind of scenario? Finally, am I wrong to try to run the simulation at 60 FPS on mobile, and should I try to make it run at 30 FPS instead?
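
    One common way to act on the "only part of the world is visible" observation is distance-based activation, as in the linked post. A minimal sketch, assuming the Box2D 2.3-era C++ API (b2Body::SetActive; Box2D 2.4 renames it SetEnabled) and a hypothetical user-data tag to tell enemies apart from the player; the trade-off raised in the question still applies, since inactive bodies are skipped entirely by the solver and will not move.

    #include <Box2D/Box2D.h>

    const float kActiveRadius = 30.0f;  // in world meters; tune to roughly your visible area

    void UpdateEnemyActivation(b2World& world, const b2Vec2& playerPos) {
        for (b2Body* body = world.GetBodyList(); body; body = body->GetNext()) {
            if (body->GetType() != b2_dynamicBody) continue;  // tiles are static, skip them
            if (body->GetUserData() == NULL) continue;        // hypothetical: only enemies carry user data

            b2Vec2 toPlayer = playerPos - body->GetPosition();
            bool shouldBeActive = toPlayer.Length() < kActiveRadius;
            if (body->IsActive() != shouldBeActive)
                body->SetActive(shouldBeActive);  // inactive bodies cost almost nothing per step
        }
    }

    If far enemies must keep closing in on the player, a cheaper variant of the same idea is to leave them inactive and advance their positions outside the physics step (for example with b2Body::SetTransform at a reduced update rate), activating them only once they approach the visible area.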

    Read the article

  • Expanding RAID-5

    - by Garry
    I'm new to RAID and trying to get my head around things. I have owned a Drobo in the past (which I liked) but it failed. Here's a hypothetical scenario: Assume I set up a RAID-5 array consisting of four 1TB hot-swappable 2.5" SATA drives. I name this volume 'My Data'. By my calculations, that would give me 2.7TB of usable space and the ability to recover if a single drive fails. I have a few questions: What happens if I pull out a single 1TB drive and replace it with a 2TB drive? Would the array automatically rebuild itself with no issues? Would the maximum capacity remain 2.7TB? If number (1) above is true and the array rebuilds itself with three 1TB drives one 2TB drive what would happen if I then pulled another 1TB drive out and stuck in a 2TB drive (you can see where I'm going here can't you). Would I eventually be able to gain more storage by gradually adding bigger drives? From a practical point of view, how much input is required from me as the end user whilst these drives are being pulled out and put in? On the Drobo, the storage space just automagically handles itself. Would I have to be actively involved in telling Ubuntu what was going on or would any of it be automated? Thanks in advance,
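
    For the capacity questions, the single-parity arithmetic is worth keeping in mind (a sketch, using the usual RAID-5 rule that the smallest member drive governs; the 2.7 figure comes from quoting the 3 TB result in binary TiB):

    \text{usable} \approx (N - 1) \times \min_i c_i
    \qquad\Rightarrow\qquad (4 - 1) \times 1\,\text{TB} = 3\,\text{TB} \approx 2.7\,\text{TiB}

    So swapping a single 1 TB member for a 2 TB drive leaves the usable size at (4 - 1) × 1 TB; the extra space only becomes available once every member is 2 TB and the array and filesystem are grown, on implementations that support online expansion.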

    Read the article
