Search Results

Search found 18191 results on 728 pages for 'single board'.


  • Why not Green Threads?

    - by redjamjar
    Whilst I know questions on this have been covered already (e.g. http://stackoverflow.com/questions/5713142/green-threads-vs-non-green-threads), I don't feel like I've got a satisfactory answer. The question is: why don't JVMs support green threads anymore? It says this on the code-style Java FAQ: A green thread refers to a mode of operation for the Java Virtual Machine (JVM) in which all code is executed in a single operating system thread. And this over on java.sun.com: The downside is that using green threads means system threads on Linux are not taken advantage of and so the Java virtual machine is not scalable when additional CPUs are added. It seems to me that the JVM could have a pool of system threads equal to the number of cores, and then run green threads on top of that. This could offer some big advantages when you have a very large number of threads that block often (mostly because current JVMs cap the number of threads). Thoughts?
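
    Purely as an illustration (not from the question or the quoted FAQs), here is a minimal Java sketch of that proposal: a fixed pool of OS threads sized to the core count, with many small units of work multiplexed on top of it. All names are made up for the example.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ManyTasksFewThreads {
        public static void main(String[] args) throws InterruptedException {
            // One OS thread per core, as the question proposes.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            // Far more units of work than a JVM could host as native threads.
            for (int i = 0; i < 1_000_000; i++) {
                final int id = i;
                pool.execute(() -> {
                    // A real green-thread scheduler would park only this task
                    // when it blocks; a plain pool like this one blocks the
                    // whole OS thread, which is exactly the gap being discussed.
                    long sum = 0;
                    for (int j = 0; j < 1_000; j++) {
                        sum += (long) id * j;
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }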

    Read the article

  • Six Rubens’ Tubes Combined Into a Fire-Based Music Visualizer [Video]

    - by Jason Fitzpatrick
    The last Rubens’ Tube setup we shared with you was but a simple single tube. This impressive setup is six independent tubes that register distinct frequencies of sound in a musical composition as standing flames. Check out the video to see it in action. Curious about the Rubens’ Tubes? Read more about the phenomenon here. [via Design Boom]

    Read the article

  • UK Connected Systems User Group - Udi Dahan Event Topic change

    - by Michael Stephenson
    Hi, just wanted to get the word out about a change to the May user group event. Udi Dahan will present a new topic which he has not presented in the UK before. Details below. To register for this event please refer to: http://ukconnectedsystemsusergroup.org/UpcomingEvents.aspx
    Title: High Availability - A Contrarian View
    Abstract: Many developers are aware of the importance of high availability, critically analyzing any single points of failure in the infrastructure. Those same developers rarely give a second thought to the periods of time when a system is being upgraded. Even if all the servers are running, most systems cannot function in-between versions. Yet with the increased pace of business, users are demanding ever more frequent releases. The poor maintenance programmers and system administrators are left holding the bag long after the architecture that sealed their fate was formulated. Join Udi for some different perspectives on high availability - architecture and methodology for the real world.

    Read the article

  • Doing two Declarative Operations with One Button

    - by shay.shmeltzer
    You can file the video below under "things that get asked on OTN a lot". With ADF it is very easy to drag an operation to a page to create a button that activates it. But what if you want a single button to invoke two operations? For example, have a button that does a "Delete" as well as a "Commit". The way to do it is to add an action binding, and then override the button's method in a backing bean to call the additional action. The nice thing is that JDeveloper will create all the binding code for you in the backing bean - all you need to do is duplicate it. Here is a quick demo:
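
    For reference, a rough sketch of what the duplicated backing-bean code could look like; the binding names ("Delete", "Commit") and the method name are placeholders, not taken from the demo.

    import oracle.adf.model.BindingContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class MyPageBackingBean {
        // Hypothetical action method wired to the single button.
        public String deleteAndCommit_action() {
            BindingContainer bindings =
                    BindingContext.getCurrent().getCurrentBindingsEntry();

            // First declarative operation: delete the current row.
            OperationBinding delete = bindings.getOperationBinding("Delete");
            delete.execute();
            if (!delete.getErrors().isEmpty()) {
                return null; // stay on the page so the errors are shown
            }

            // Second declarative operation: commit the transaction.
            OperationBinding commit = bindings.getOperationBinding("Commit");
            commit.execute();
            return null;
        }
    }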

    Read the article

  • How to "un-automount" external harddrive?

    - by Timon
    So I dualboot 12.10 and Win7. Both OSs are on the primary SSD while all commonly used data (documents, movies, music, profiles etc) is on a secondary NTFS-formatted HDD. Since I needed the NTFS drive to automatically mount in Ubuntu right at startup, I downloaded ntfs-config and set it to automount my NTFS drive. Problem is, I also accidentally told it to automount my external hard drive (which is also NTFS formatted). When booting up Ubuntu, it now checks for the presence of that drive every single time, which is getting annoying 'cause I don't always have it connected. I've tried un- and reinstalling ntfs-config, telling it to not automount the external HD, but to no avail. Any suggestions?

    Read the article

  • Objective-c Cocos2d moving a sprite

    - by marcg11
    I hope someone knows how to do the following with cocos2d: I want a sprite to move, but not in a single straight line, which is what I get by using [cocosGuy runAction: [CCMoveTo actionWithDuration:1 position:location]]; What I want is for the sprite to follow movements that I pre-establish. For example, at some point I want the sprite to move up and then down, but along a curve. Do I have to do this with Flash like this document says? http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide:animation Does "animation" on this page mean moving sprites, or something else? Thanks

    Read the article

  • Oracle saves with Oracle Database 11g and Advanced Compression

    - by jenny.gelhausen
    Oracle Corporation runs a centralized eBusiness Suite system on Oracle Database 11g for all its employees around the globe. This clustered Global Single Instance (GSI) has scaled seamlessly with many acquisitions over the years, doubling the number of employees since 2001 and supporting around 100,000 employees today, 24 hours a day, 7 days a week around the world. In this podcast, you'll hear from Raji Mani, IT Director for Oracle's PDIT Group, on how Oracle Database 11g and Advanced Compression are helping to save big on storage costs.

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, which deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly.

    A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64 bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here. Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did.

    So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6 core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment!

    Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
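
    As a rough illustration only (the post's code is .NET; this is a minimal Java sketch of the fixed-size circular-buffer idea, assuming a power-of-two capacity and exactly one producer thread and one consumer thread):

    import java.util.concurrent.atomic.AtomicLong;

    // Single-producer/single-consumer ring buffer: each index is written by
    // exactly one thread, so no compare-and-swap is needed; the ordered
    // writes (lazySet) act as the publication barrier the post mentions.
    final class SpscRingBuffer<T> {
        private final Object[] slots;
        private final int mask;
        private final AtomicLong head = new AtomicLong(); // consumer position
        private final AtomicLong tail = new AtomicLong(); // producer position

        SpscRingBuffer(int powerOfTwoCapacity) {
            slots = new Object[powerOfTwoCapacity];
            mask = powerOfTwoCapacity - 1;
        }

        // Called only by the producer thread.
        boolean offer(T value) {
            long t = tail.get();
            if (t - head.get() == slots.length) {
                return false; // full
            }
            slots[(int) (t & mask)] = value;
            tail.lazySet(t + 1); // publish only after the slot is written
            return true;
        }

        // Called only by the consumer thread.
        @SuppressWarnings("unchecked")
        T poll() {
            long h = head.get();
            if (h == tail.get()) {
                return null; // empty
            }
            T value = (T) slots[(int) (h & mask)];
            slots[(int) (h & mask)] = null;
            head.lazySet(h + 1); // release the slot after it has been read
            return value;
        }
    }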

    Read the article

  • Mobile Development- Obtaining development hardware - best practices?

    - by Zoot
    I'm looking to get into smartphone development, but there are quite a few options out there for platforms right now. (iOS/Android/WebOS/Bada/Symbian/MeeGo/WindowsMobile/JavaME) I'd like to have development hardware to test my code and the overall functionality of the devices. What is the best way to obtain and/or borrow hardware for development and testing? Are there rules of thumb to follow which apply to all companies and platforms? In this situation, I'm a single developer. Does this process change for a startup? A hackerspace? A small business? A large business?

    Read the article

  • how to cause linux system datetime to run faster than real world datetime?

    - by JamesThomasMoon1979
    Background
    I want to monitor a running Linux system over several days. It's a custom Gentoo build with a lot of custom software on board. This software has ongoing maintenance timers and cron scripts and other clock-driven events. I need to verify these scheduled events are working.
    Problem
    Waiting for the system to step through daily and weekly activity is a long wait time. And modifying all clock-based timers on the system would be time consuming. Yet, I often want to test a system's end-to-end scheduled activities without waiting a week.
    Potential Solution
    Have the Linux system under test appear to run through its daily cycle of activity within just a few hours.
    My Question for Serverfault
    Is there a way to cause the system's time to run faster than real-world time? My first thought is manipulating the ntp daemon to repeatedly and smoothly increment the clock. Any other ideas? And yes, I know this may have strange side effects. However, the system has no important or time-critical interactions with systems outside of itself. And this may be a valuable testing technique.

    Read the article

  • SharePoint 2013 Development Machine - Now Available

    - by Sahil Malik
    With SharePoint 2013 RTM'ing, I am thrilled to announce an updated version of my SharePoint 2013 Development Machine booklet. As you know, I am publishing many small booklets, and eventually I will be publishing a single big book as well - sort of the track/CD model. This self-e-publish model also allows me to keep the content updated as I learn more. There is a very minor portion at the end that is still pre-RTM. Specifically, as of now SharePoint Designer 2013 and Visual Studio tools for SharePoint 2013 have not yet RTM'ed. However, installing those is not very different from Beta 2. The screenshots may change a bit. I will of course update the book as soon as the RTM bits are available. Read full article ....

    Read the article

  • Entity/Component based engine rendering separation from logic

    - by Denis Narushevich
    I noticed in Unity3D that each gameObject (entity) has its own renderer component; as far as I understand, such a component handles rendering logic. I wonder if it is common practice in entity/component based engines for a single entity to have renderer components and logic components such as position and behavior altogether in one box? Such an approach sounds odd to me; in my understanding the entity itself belongs to the logic part and shouldn't contain any render-specific things inside. With such an approach it is impossible to swap renderers - it would require rewriting all those customized renderers. The way I would do it is that the entity would contain only logic-specific components, like AI, transform, and scripts, plus a reference to a mesh or sprite. Then some entity with a Camera component would store references to all objects visible to the camera. And in order to render all that stuff I would pass the Camera reference to a Renderer class and render all sprites and meshes of the visible entities. Is such an approach somehow wrong?
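
    A rough sketch (all names invented for the example) of the separation being described: entities carry only logic-side components plus a resource reference, and a swappable renderer walks whatever the camera says is visible.

    import java.util.ArrayList;
    import java.util.List;

    final class Transform { float x, y, z; }

    // Logic-side entity: components and a mesh reference, but no render code.
    final class Entity {
        final Transform transform = new Transform();
        String meshId;
    }

    // In a real engine the visible list would come from culling against the
    // camera; here it is simply filled in by hand.
    final class Camera {
        final List<Entity> visible = new ArrayList<>();
    }

    interface Renderer {
        void render(Camera camera);
    }

    // One possible backend; swapping renderers means swapping only this class.
    final class ConsoleRenderer implements Renderer {
        @Override
        public void render(Camera camera) {
            for (Entity e : camera.visible) {
                // A real renderer would look up e.meshId in a resource cache
                // and draw it at e.transform.
                System.out.printf("draw %s at (%.1f, %.1f, %.1f)%n",
                        e.meshId, e.transform.x, e.transform.y, e.transform.z);
            }
        }
    }

    final class Demo {
        public static void main(String[] args) {
            Entity crate = new Entity();
            crate.meshId = "crate";
            Camera camera = new Camera();
            camera.visible.add(crate);
            new ConsoleRenderer().render(camera);
        }
    }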

    Read the article

  • New SSD freezing on older motherboard (intel G31)

    - by DJM
    I have an ECS G31T-M motherboard running a Core 2 Quad processor, 4GB RAM, Windows 7 32-bit, GeForce 9600GT. I bought a SanDisk Extreme III 120GB (SDSSDX-120G-G25) and installed it today. I'm outputting from the 9600GT with the included TV-out adapter to component video (to my HDTV). This motherboard is SATA 2 and, from what I can tell, SSDs run on the IDE controller and there is nothing fancy to set up for advanced SSD features. I've noticed on other forums (but have not verified with ECS) that this board does not support AHCI. I have two versions of Windows 7 installed on two drives: the SSD and an old 500GB disc drive. When booted from my older 500GB HDD, video plays fine on the HDTV. When booting the SSD's Windows 7 install, I am freezing constantly, as in, video plays OK for a minute, then the picture freezes for 1-3 minutes (sometimes while audio continues playing) and returns for 20-30 seconds before doing the same thing again. Other tasks such as basic maneuvering through file folders seem to be no problem. Please help!! Do I need a new system for this thing to work, or could there be other fixes? I updated the firmware to R201 to no avail. DJM

    Read the article

  • S#arp Architecture 1.5.2 released

    - by AlecWhittington
    It has been a few weeks since S#arp Architecture 1.5 RTM was released. While it was a major success, a few issues were found that needed to be addressed. These mostly involved the Visual Studio templates. What's new in S#arp Architecture 1.5.2?
    - Merged the SharpArch.* assemblies into a single assembly (SharpArch.dll)
    - Updated both VS 2008 and 2010 templates to reflect the use of the merged assembly
    - Updated SharpArch.build with a custom script that allows the merging of the assemblies
    - Copies new merged...(read more)

    Read the article

  • How can I restore my original IME Romaji input settings?

    - by JOhn K
    This related question, Japanese IME on Windows: switch back to romaji input method (3), did not help; the problem seems to be the same. On my Vista Home Premium PC I had been using the Microsoft IME for English and Japanese input with romaji henkan for a long time. One day, all of a sudden, the PC started up with the Caps Lock indicator on, so I press the Shift key and the Caps Lock indicator turns off (I have to do this every morning). Now, when I want to type romaji to convert to Japanese, I switch from EN English (United States) to JP Japanese (Japan) and set the input to hiragana. That worked until that day. But now, when I set romaji input for hiragana as I used to and start typing, hiragana appears directly on the display, as if the keyboard layout were the Japanese 109-key layout (the JIS keyboard shown on Wikipedia). I cannot enter hiragana the way I want (converting to kanji with the space key still works), and that keyboard arrangement is one I never learned. Another thing I found is that hitting the "`" key switches between hiragana and alphabet. The Control Panel settings look the same as they always have. Please suggest a solution to get back my original IME input mode. John K.

    Read the article

  • Advice on SCRUM for the solitary developer [closed]

    - by ProfK
    Possible Duplicate: Agile for the Solo Developer I am looking for advice on the SCRUM process for a solitary developer. Most SCRUM resources I see focus on its use in a team environment, hence my question here. I'd like some guidance on structuring and managing my projects for SCRUM, with me as a solitary developer and business owner, but still occasionally including my clients for input and feedback. Areas I'm not clear on include resolving my backlog into 'sprintable' project areas and stories, defining user stories properly with a view to their being digested by developer-level users, defining feasible sprints for a single developer, etc. Essentially I'm looking for advice on moving from using scrum in a team/office environment, with colleagues and a project manager, and from chaos/cowboy-coding on my own, to assuming the role of PM myself and adopting scrum for work on my own. Any advice is welcome.

    Read the article

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our PFSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as this is on-board. The other NICs are all WAN connections and they must all be present (i.e. I can't disable one just for the sake of testing). After re-installing PFSense and restoring our backup of the configuration, everything came back online just fine, however on the new hardware any download that takes longer than about 10 seconds just times out in the middle. Example 1: Downloading from Microsoft.com goes at about 900k/sec and times out after about 10 seconds (thus, just under 10Mb of content). Example 2: Downloading from cnet.com goes at about 300k/sec and times out after about 10 seconds (thus, about 3Mb of content). By times out, I mean that the download just stops, and you have to pause/resume to get the next part done, rinse and repeat until the download is complete. However it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes. I assume this is most likely because PFSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as NFE0, and there's no room in the server for any more NICs, and I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte so please be kind!

    Read the article

  • Why does ATI 5570 HD video card driver installation cause Windows 7 To Blue Screen?

    - by Mort
    This one is for the hive mind. I have a brand new Dell Optiplex 760 workstation with 4 gigabytes of RAM running Windows 7 Professional (32-bit). This is a new box with nothing installed other than what was provided directly by Dell. I installed a Sapphire ATI PCI Express 5570 HD. Upon trying to install the 10.4 Catalyst drivers the system will blue screen. It blue screens during the hardware detection phase of the installation process. I have already performed the following troubleshooting steps:
    - Changed system RAM
    - Installed only 2 gigabytes of RAM
    - Installed different versions of Catalyst drivers (10.4 - 9.12)
    - Tried to install only the video component of the driver (vs the entire Catalyst suite)
    - Made sure Windows 7 was fully updated
    - Flashed the motherboard BIOS to the current version
    - Removed and re-seated the video card
    - Contacted ATI Support (we all know how this went......)
    - Verified the supply is outputting properly
    The blue screen error (via the Windows BugCheck entry in the event log) is a 0x000000CA and refers to a plug and play error most likely caused by a bad driver. The problem is that the driver installation process never gets far enough to actually install a driver. The resolution center in Windows provides a solution of installing the 10.4 Catalyst driver to resolve the issue (which fails). Looking for some alternate views to resolve.

    Read the article

  • Ubuntu 11.04: Add right-click menu "Compress as ZIP"

    - by Ananthavel Sundararajan
    Step 1: I wanted to add a "Compress as ZIP" menu item on right click. I know I can change the default compress format to ZIP using gconf-editor, but I wanted to add a new menu item for compressing as ZIP without opening any other option dialog. Step 2: I wanted to compress a file as ZIP and rename it to "epub". Please let me know: is it possible to zip and rename by adding a single menu item? I'm using Ubuntu 11.04 and installed "Nautilus-Action-Configurations", but was unsuccessful. N.B. I have read this Ask Ubuntu Q&A; I don't want a new window to open asking me to choose the format. It should be saved straight away as ZIP.

    Read the article

  • How to make bash quit tab autocompleting hidden directories

    - by Kristopher Micinski
    Most of the time, I don't need autocompletes for my hidden directories. In fact, that's the point of them being hidden! However, annoyingly, bash takes these directories into account when considering tab autocompletion. This is particularly annoying when I have the following scenario: a .svn folder along with a single folder that I want to traverse into by simply pushing tab. (This typically comes up with deep Java packages...) Is there any way to change the default behavior? Worst case scenario, I have to type '.' before tab, which seems like a no-brainer for my usability.

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) &ndash; Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kind of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm that day users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system here that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted.  They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody that should have noticed the pattern in the DBCC output did notice.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances and some investigation should be done and a plan of action laid out or you could end up making things worse[3].  DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from this database into a new one.  This is a fifteen year old application with about seven hundred tables.  There are TRIGGERS everywhere not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business which means we could have a “do-over” of yesterday but that’s not a very palatable option either. All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up.  
    Working on a copy of the production database we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that "fixed" everything up. The allow data loss option will delete the bad rows. This isn't too horrible as we have all of those rows minus the text fields from our earlier export. Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information. We got lucky in that all of the affected rows were old and in the end we didn't lose anything. :O All of the row counts along the way worked out and it looks like we dodged a major bullet here. We've heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy. This thing is only a couple of months old. Grrr…. They dispatched a technician that night to come and update it. That explains why RAID didn't save us. All-in-all this could have been a lot worse. Given the root cause here, they basically won the lottery in not losing anything. Here are a few links to some helpful posts on the SQL Server Engine blog. I love the title of the first one:
    - Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear?
    - CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series)
    - Ta da! Emergency mode repair (we didn't have to resort to this one, thank goodness)
    Dave
    Just because I can…
    [1] Names have been changed to protect the guilty. [2] I'm Batman. [3] And if I'm the coolest head in the room, you've got even bigger problems...

    Read the article

  • Using Solaris pkg to list all setuid or setgid programs

    - by darrenm
    $ pkg contents -a mode=4??? -a mode=2??? -t file -o pkg.name,path,mode
    We can also add a package name on the end to restrict it to just that single package, e.g.:
    $ pkg contents -a mode=4??? -a mode=2??? -t file -o pkg.name,path,mode core-os
    PKG.NAME         PATH                       MODE
    system/core-os   usr/bin/amd64/newtask      4555
    system/core-os   usr/bin/amd64/uptime       4555
    system/core-os   usr/bin/at                 4755
    system/core-os   usr/bin/atq                4755
    system/core-os   usr/bin/atrm               4755
    system/core-os   usr/bin/crontab            4555
    system/core-os   usr/bin/mail               2511
    system/core-os   usr/bin/mailx              2511
    system/core-os   usr/bin/newgrp             4755
    system/core-os   usr/bin/pfedit             4755
    system/core-os   usr/bin/su                 4555
    system/core-os   usr/bin/tip                4511
    system/core-os   usr/bin/write              2555
    system/core-os   usr/lib/utmp_update        4555
    system/core-os   usr/sbin/amd64/prtconf     2555
    system/core-os   usr/sbin/amd64/swap        2555
    system/core-os   usr/sbin/amd64/sysdef      2555
    system/core-os   usr/sbin/amd64/whodo       4555
    system/core-os   usr/sbin/prtdiag           2755
    system/core-os   usr/sbin/quota             4555
    system/core-os   usr/sbin/wall              2555

    Read the article

  • Examples of continuous integration workflow using git

    - by Andrew Barinov
    Can anyone provide a rough outline of their git workflow that complies with continuous integration? E.g. how do you branch? Do you fast-forward commits to the master branch? I am primarily working with Rails as well as client- and server-side Javascript. If anyone can recommend a solid CI technology that's compatible with those, that'd be great. I've looked into Jenkins but would like to check out other good alternatives. To put some context into this, I am planning on transitioning from working as a single developer to working as part of a team. I'd like to start standardizing my own personal workflow so that I can onboard new devs quickly.

    Read the article

  • Torchlight II Drops Today; New Classes and Miles of Atmospheric Dungeon Crawling Await

    - by Jason Fitzpatrick
    Torchlight II, sequel to the extremely popular Torchlight action-RPG, is available for sale today. With four new classes and a massively expanded world, you'll have plenty to explore. The new release features extra classes, extra companion creatures, in-game weather systems, and of course: updated graphics and a massively expanded game universe. Trumping all these additions, however, is LAN/internet co-op multiplayer, by far the feature most requested and anticipated by Torchlight fans. Check out the trailer video above to take a peek at the game, read more about it at the Torchlight II site, and then hit up the link below to grab a copy on Steam; you can pre-order it any time, but it won't be officially available for download until 2PM EST today. Torchlight II is Windows-only, $19.99 for a single copy or $59.99 for a friend 4-pack (which includes a copy of Torchlight I). Torchlight II

    Read the article

  • HTG Explains: Understanding Routers, Switches, and Network Hardware

    - by Jason Fitzpatrick
    Today we’re taking a look at home networking hardware: what the individual pieces do, when you need them, and how best to deploy them. Read on to get a clearer picture of what you need to optimize your home network. When do you need a switch? A hub? What exactly does a router do? Do you need a router if you have a single computer? Network technology can be quite an arcane area of study, but armed with the right terms and a general overview of how devices function on your home network, you can deploy your network with confidence.

    Read the article
