Search Results

Search found 29473 results on 1179 pages for 'solaris 10'.


  • Transforming a primitive tetrahedron into a primitive icosahedron?

    - by Djentleman
    I've created a tetrahedron by creating a BoundingBox and building the faces of the tetrahedron within the bounding box as follows (see image as well):

        VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[12];
        BoundingBox box = new BoundingBox(new Vector3(-1f, 1f, 1f), new Vector3(1f, -1f, -1f));
        vertices[0].Position = box.GetCorners()[0];
        vertices[1].Position = box.GetCorners()[2];
        vertices[2].Position = box.GetCorners()[7];
        vertices[3].Position = box.GetCorners()[0];
        vertices[4].Position = box.GetCorners()[5];
        vertices[5].Position = box.GetCorners()[2];
        vertices[6].Position = box.GetCorners()[5];
        vertices[7].Position = box.GetCorners()[7];
        vertices[8].Position = box.GetCorners()[2];
        vertices[9].Position = box.GetCorners()[5];
        vertices[10].Position = box.GetCorners()[0];
        vertices[11].Position = box.GetCorners()[7];

    What would I then have to do to transform this tetrahedron into an icosahedron? Similar to this image: I understand the concept but applying it is another thing entirely for me.
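
    For illustration, one standard shortcut sidesteps the subdivision entirely: an icosahedron's 12 vertices are the cyclic permutations of (0, ±1, ±φ), where φ is the golden ratio, and its 20 faces fall out as the triples of mutually nearest vertices. A minimal sketch in plain Python (not the asker's C#/XNA), just to show the construction:

        # Sketch: generate icosahedron vertices and faces from the golden ratio.
        import math
        from itertools import combinations

        phi = (1 + 5 ** 0.5) / 2
        verts = []
        for s1 in (-1, 1):
            for s2 in (-1, 1):
                # cyclic permutations of (0, +/-1, +/-phi)
                verts.append((s1 * 1.0, s2 * phi, 0.0))
                verts.append((0.0, s1 * 1.0, s2 * phi))
                verts.append((s2 * phi, 0.0, s1 * 1.0))

        # Edges are the shortest inter-vertex distance; faces are the triples
        # of vertices that are pairwise exactly that distance apart.
        edge = min(math.dist(p, q) for p, q in combinations(verts, 2))
        faces = [t for t in combinations(range(len(verts)), 3)
                 if all(abs(math.dist(verts[i], verts[j]) - edge) < 1e-9
                        for i, j in combinations(t, 2))]
        assert len(verts) == 12 and len(faces) == 20

    Each face triple could then be expanded into three VertexPositionNormalTexture entries, the same way the tetrahedron's corners are filled in above.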

    Read the article

  • Why is my root filesystem always scanned at boot?

    - by luri
    I always have a pause at boot saying my filesystems are being checked (with a "press C to cancel" note, too). Actually (seeing boot.log) I think it's the / fs, which is located at /dev/sdb5. Several questions altogether here (hope this does not break any rule):

    1. Is this normal?
    2. Can I (or even should I) prevent this anyhow?
    3. According to boot.log (below) the fs does not seem to be 'clean', or, at least, it's in a state or condition that makes fsck always scan it for errors for a while (just a few seconds). How can I fix it?

    Edit: This is my boot.log:

        fsck from util-linux-ng 2.17.2
        udevd[515]: can not read '/etc/udev/rules.d/z80_user.rules'
        /dev/sdb5: 249045/32841728 files (0.3% non-contiguous), 20488485/131338752 blocks
        init: ureadahead-other main process (1111) terminated with status 4
        init: ureadahead-other main process (1116) terminated with status 4
        Password:
        * Starting AppArmor profiles [160G Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox [154G[ OK ]
        * Setting sensors limits [160G [154G[ OK ]

    And these are the dumpe2fs results for the filesystem being checked (well, the relevant part of the log):

        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32841728
        Block count:              131338752
        Reserved block count:     6566937
        Free blocks:              110850356
        Free inodes:              32592701
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Dec 10 19:44:15 2010
        Last mount time:          Mon Feb 14 17:00:02 2011
        Last write time:          Mon Feb 14 16:59:45 2011
        Mount count:              1
        Maximum mount count:      33
        Last checked:             Mon Feb 14 16:59:45 2011
        Check interval:           15552000 (6 months)
        Next check after:         Sat Aug 13 17:59:45 2011
        Lifetime writes:          331 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28049496
        Default directory hash:   half_md4
        Directory Hash Seed:      d3d24459-514b-4413-b840-e970b766095b
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x0005e0c4
        Journal start:            1

    This is the relevant (at least I think this is the fs being checked) line in fstab:

        # Entry for /dev/sdb5 :
        UUID=42509bf9-f3e6-460a-8947-ec0f5c1fbcc8 / ext4 errors=remount-ro 0 1
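
    Two fields in that dump are worth flagging: "First orphan inode" is set (which normally indicates pending orphan cleanup from an unclean state), and fsck also re-checks whenever the mount count or check interval is exceeded. A rough sketch of that decision in Python; the field names match the dump above, but parsing dumpe2fs this naively is an assumption for illustration, not a robust tool:

        # Sketch: decide from `dumpe2fs -h` output whether a check looks due.
        # Usage: periodic_check_due(open("dump.txt").read())
        import re

        def periodic_check_due(dump_text):
            def field(name):
                m = re.search(r"^" + name + r":\s*(.+)$", dump_text, re.M)
                return m.group(1).strip() if m else None

            mounts = int(field("Mount count"))
            max_mounts = int(field("Maximum mount count"))
            orphan = field("First orphan inode")   # present => unclean state
            # (a fuller version would also compare "Last checked" plus
            #  "Check interval" against the current time)
            return mounts >= max_mounts or orphan is not None

    On this dump it returns True because of the orphan inode, which is consistent with fsck running at every boot; tune2fs can adjust the mount-count and interval policies, though an unclean state is better fixed than silenced.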

    Read the article

  • How to expose game data in the game without a singleton?

    - by zardon
    I'm quite new to cocos2d and games programming, and am currently writing a game that is in the prototype stage. Everything is going okay, but I've realized a potentially big problem and I am not sure how to solve it. I am using a singleton to store a bunch of arrays for everything: a global list of planets, a global list of troops, a global list of products, etc. And only now I'm realizing that all of this will be in memory, and this is the wrong way to do it. I am not storing files or anything on the disk just yet, with the exception of a save/load state, which is a capture of everything. My game makes use of a map which allows you to select a planet; it will then give you a breakdown of that planet's troops and resources. Let's use this scenario: my game has 20 planets, on each of which you can have 20 troops. Straight away that's an array of 400! This does not count the NPCs, which are another 10 per planet. So, 20x10 = 200. So, now we have 600, all in arrays inside a singleton. This is obviously very bad, and very wrong, especially as the game scales in the amount of data. But I need to expose pretty much everything, especially on the map page, and I am not sure how else to do it. I've been told that I can use a controller for the map page which has the information I need for each planet, and other controllers for other items I require global display for. I've also thought about storing each planet's data in a save file, using initWithCoder; however, there could be a boatload of files on the user's device. I really don't want to use a database, mainly because I would need to translate NSObjects and non-NSObjects like CGRects and CGPoints and Colors into/from SQL. I am open to other ideas on how to store and read game data to prevent using a singleton to store everything, everywhere. Thanks for your time.
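
    One way to picture the controller-based suggestion above: keep all game state in one plain "world" object owned by the top-level game object, and hand it to each screen that needs it, instead of reaching for a global. A minimal sketch in Python rather than Objective-C/cocos2d, with all names invented for the example:

        # Sketch: an explicit state object passed to controllers, no singleton.
        class Planet:
            def __init__(self, name):
                self.name = name
                self.troops = []     # troops stationed on this planet
                self.resources = {}

        class GameWorld:
            """Owns every planet; lives exactly as long as a game session,
            and is the single thing a save/load system serializes."""
            def __init__(self, planets):
                self.planets = {p.name: p for p in planets}

        class MapScreen:
            # The screen is handed the world instead of importing a global,
            # so it can be tested (or previewed) with a tiny stub world.
            def __init__(self, world):
                self.world = world

            def breakdown(self, planet_name):
                p = self.world.planets[planet_name]
                return {"planet": p.name,
                        "troops": len(p.troops),
                        "resources": dict(p.resources)}

        world = GameWorld([Planet("Alpha"), Planet("Beta")])
        print(MapScreen(world).breakdown("Beta"))

    Six hundred small objects is not in itself a memory problem; the win here is that ownership and lifetime become explicit, so moving rarely-used data to disk later only has to touch GameWorld.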

    Read the article

  • Top 25 security issues for developers of web sites

    - by BizTalk Visionary
    Sourced from: CWE. This is a brief listing of the Top 25 items, using the general ranking. NOTE: 16 other weaknesses were considered for inclusion in the Top 25, but their general scores were not high enough. They are listed in the On the Cusp focus profile.

        Rank  Score  ID       Name
        [1]   346    CWE-79   Failure to Preserve Web Page Structure ('Cross-site Scripting')
        [2]   330    CWE-89   Improper Sanitization of Special Elements used in an SQL Command ('SQL Injection')
        [3]   273    CWE-120  Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
        [4]   261    CWE-352  Cross-Site Request Forgery (CSRF)
        [5]   219    CWE-285  Improper Access Control (Authorization)
        [6]   202    CWE-807  Reliance on Untrusted Inputs in a Security Decision
        [7]   197    CWE-22   Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
        [8]   194    CWE-434  Unrestricted Upload of File with Dangerous Type
        [9]   188    CWE-78   Improper Sanitization of Special Elements used in an OS Command ('OS Command Injection')
        [10]  188    CWE-311  Missing Encryption of Sensitive Data
        [11]  176    CWE-798  Use of Hard-coded Credentials
        [12]  158    CWE-805  Buffer Access with Incorrect Length Value
        [13]  157    CWE-98   Improper Control of Filename for Include/Require Statement in PHP Program ('PHP File Inclusion')
        [14]  156    CWE-129  Improper Validation of Array Index
        [15]  155    CWE-754  Improper Check for Unusual or Exceptional Conditions
        [16]  154    CWE-209  Information Exposure Through an Error Message
        [17]  154    CWE-190  Integer Overflow or Wraparound
        [18]  153    CWE-131  Incorrect Calculation of Buffer Size
        [19]  147    CWE-306  Missing Authentication for Critical Function
        [20]  146    CWE-494  Download of Code Without Integrity Check
        [21]  145    CWE-732  Incorrect Permission Assignment for Critical Resource
        [22]  145    CWE-770  Allocation of Resources Without Limits or Throttling
        [23]  142    CWE-601  URL Redirection to Untrusted Site ('Open Redirect')
        [24]  141    CWE-327  Use of a Broken or Risky Cryptographic Algorithm
        [25]  138    CWE-362  Race Condition

    Cross-site scripting and SQL injection are the 1-2 punch of security weaknesses in 2010. Even when a software package doesn't primarily run on the web, there's a good chance that it has a web-based management interface or HTML-based output formats that allow cross-site scripting. For data-rich software applications, SQL injection is the means to steal the keys to the kingdom. The classic buffer overflow comes in third, while more complex buffer overflow variants are sprinkled in the rest of the Top 25.
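
    Since the top two spots are injection flaws, a minimal illustration of the standard fix for #2 (CWE-89) may be useful: bind user input as a parameter instead of splicing it into the SQL string. A Python/sqlite3 sketch, with the table and values invented for the example:

        # Sketch: parameter binding vs. string concatenation (CWE-89).
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

        name = "alice' OR '1'='1"   # hostile input
        # Vulnerable: the attacker's quote breaks out of the string literal.
        # rows = conn.execute("SELECT role FROM users WHERE name = '%s'" % name)
        # Safe: the driver treats the whole value as data, never as SQL.
        rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,))
        print(rows.fetchall())   # prints [] because the hostile input matched nothing

    The same treat-data-as-data principle, applied to HTML output encoding, is what addresses #1 (CWE-79).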

    Read the article

  • Choosing Technology To Include In Software Design

    How many of us have been forced to select one technology over another when designing a new system? What factors do we and should we consider? How can we ensure the correct business decision is made? When faced with this type of decision it is important to gather as much information as possible regarding each technology being considered, as well as the project itself. Additionally, I tend to delay my decision about the technology until it ultimately has to be made, because as the project progresses, requirements and other factors can change which technology is best for the project.

    Important factors to consider when making technology decisions:

    - Time to Implement and Maintain
    - Total Cost of Technology (including implementation and maintenance)
    - Adaptability of Technology
    - Implementation Team's Skill Sets
    - Complexity of Technology (including implementation and maintenance)
    - Forecasted Return On Investment (ROI)
    - Forecasted Profit on Investment (POI)

    Of these factors, ROI and POI weigh the heaviest because they take into consideration the other factors when calculating the profitability and return on investments.

    For a real-world example, let us consider developing a web-based lead management system for a new company. This system can be hosted either on a Microsoft Windows-based web server or on a Linux-based web server.

    Important factors for this example:

    Implementation Team's Skill Sets
    - Member 1: Classic ASP, ASP.Net, and MS SQL Server (10 years of experience)
    - Member 2: PHP, MySQL, Photoshop and MS SQL Server (3 years)
    - Member 3: C++, VB6, ASP.Net, and MS SQL Server (12 years)

    Total Cost of Technology (including implementation and maintenance)
    - Linux: $5,000 initial year, $3,000 each additional year (random values)
    - Windows: $10,000 initial year, $3,000 each additional year (random values)

    Complexity of Technology
    - Linux: large learning curve with user-driven documentation; estimated learning cost $30,000
    - Windows: minimal, given the team's skills, with Microsoft documentation; estimated learning cost $5,000

    ROI
    - Linux: initial total cost $35,000, plus $3,000 per year
    - Windows: initial total cost $15,000, plus $3,000 per year

    Based on these hypothetical numbers it would make more sense to select the Windows-based web server, because the initial investment in the technology is much lower than for the Linux-based web server (the arithmetic is sketched below).
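
    A small sketch of that arithmetic, using only the hypothetical numbers above; "extra_years" means additional years after the initial one:

        # Sketch: multi-year total cost of ownership for the two options.
        def total_cost(initial, learning, yearly, extra_years):
            return initial + learning + yearly * extra_years

        options = {"Linux": (5000, 30000), "Windows": (10000, 5000)}
        for name, (initial, learning) in options.items():
            costs = [total_cost(initial, learning, 3000, y) for y in (0, 1, 3, 5)]
            print(name, costs)
        # Linux   [35000, 38000, 44000, 50000]
        # Windows [15000, 18000, 24000, 30000]

    Even five years out the gap never closes with these inputs, but the point of delaying the decision is that a change to any single input (say, the learning cost after a new hire) can flip the answer.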

    Read the article

  • How Do Top Performing High Tech Companies Measure Online Marketing Success?

    - by Charles Knapp
    You might expect a focus on Net Promoter scores, open rates, and click metrics. The real answers from top performers may surprise you. I've been working for a few months with Aberdeen Group and colleagues from IBM and Oracle to survey high technology firms worldwide on best practices in marketing and channel sales effectiveness.  Now, we will share the results of our original customer research in a new white paper and webcast. Register today to learn how leading High Tech companies are increasing their Return on Marketing Investment (ROMI) and growing channel sales revenue. Discover how top performing high tech companies manage and use customer data, measure marketing spend effectiveness, and support internal and channel sales. Learn how best in class high tech companies use enterprise data throughout their customer lifecycle -- messaging to leads, selling to prospects, and serving customers. Our speakers will be: Peter Ostrow, Research Director - Sales Effectiveness, Aberdeen Group David Lasher, Global Business Services Partner, IBM Jonathan Oomrigar, Vice President, Global High Technology Business Unit, Oracle Reserve your place now! This global webinar is on Tuesday, November 15, 10-11 am PST / 1-2 pm EST / 6-7 GMT / 7-8 CET

    Read the article

  • How to indicate reliability when reporting availability of competencies

    - by Jan Doggen
    We have employees with competencies:

        Pete: Welder, Carpenter
        Melissa: Carpenter

    Assume they both work 40 hours/week, and have not yet been assigned work. We need to report the availability of these competencies, expressed in hours. As far as I can see now, we can report this in two ways:

    Method A. When someone has multiple competencies, count them both.

        Welder: 40 hours
        Carpenter: 80 hours

    Method B. When someone has multiple competencies, count an equal division of hours for each.

        Welder: 20 hours
        Carpenter: 60 hours

    Method A has our preference:

    - A good planner will know to plan the least available competency first. If 30 hours of welding is planned, we will be left with 10 welder, 50 carpenter.
    - Method B has the disadvantage that the planner thinks he cannot plan the job when 30 hours of welding is required.

    However, if we report this we would like to give an estimate of the reliability of the numbers for each competency, i.e. how much are these over-reported? In my example A, would I say that carpenter is 100% over-reported, or 50%, or maybe another number? How would I calculate this for large numbers of competencies? I'm sure we are not the first ones dealing with this; is there a 'usual' way of doing this in planning? Additionally:

    - Would there be an even better method than A or B?
    - Optionally, we also have a preference order of competencies (like: use him/her in this order); Pete could be 1. welder 2. carpenter. Does this introduce new options?
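
    One possible way to quantify the over-reporting, sketched in Python: compute both methods, then express method A's figure relative to method B's, so 100% means the reported hours are double the "fair share" hours. This is just one candidate definition for illustration, not an established planning metric:

        # Sketch: methods A and B plus an over-report factor per competency.
        from collections import defaultdict

        people = {"Pete": ["Welder", "Carpenter"], "Melissa": ["Carpenter"]}
        hours_per_week = 40

        method_a = defaultdict(float)   # full hours counted per competency
        method_b = defaultdict(float)   # hours split evenly across skills
        for person, skills in people.items():
            for skill in skills:
                method_a[skill] += hours_per_week
                method_b[skill] += hours_per_week / len(skills)

        for skill in method_a:
            over = method_a[skill] / method_b[skill] - 1
            print(f"{skill}: A={method_a[skill]:.0f}h "
                  f"B={method_b[skill]:.0f}h over-reported={over:.0%}")
        # Welder: A=40h B=20h over-reported=100%
        # Carpenter: A=80h B=60h over-reported=33%

    By this measure the example's welder figure is 100% over-reported and the carpenter figure 33%, which matches the intuition that the scarcer skill carries the bigger risk.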

    Read the article

  • Top Ten Reasons to Attend the 2015 Oracle Value Chain Summit

    - by Terri Hiskey
    Need justification to attend the 2015 Oracle Value Chain Summit? Check out these Top Ten Reasons you should register now for this event:

    1. Get Results: 60% higher profits. 65% better earnings per share. 2-3x greater return on assets. Find out how leading organizations achieved these results when they transformed their supply chains.
    2. Hear from the Experts: Listen to case studies from leading companies, and speak with top partners who have championed change.
    3. Design Your Own Conference: Choose from more than 150 sessions offering deep dives on every aspect of supply chain management: Cross Value Chain, Maintenance, Manufacturing, Procurement, Product Value Chain, Value Chain Execution, and Value Chain Planning.
    4. Get Inspired by Those Who Dare: Among the luminaries delivering keynote sessions are former SF 49ers quarterback Steve Young and Andrew Winston, co-author of one of the top-selling green business books, Green to Gold.
    5. Expand Your Network: With 1500+ attendees, this summit is a networking bonanza. No other event gathers as many of the best and brightest professionals across industries, including tech experts and customers from the Oracle community.
    6. Improve Your Skills: Enhance your expertise by joining NEW hands-on training sessions.
    7. Perform a Road-Test: Try the latest IT solutions that generate operational excellence, manage risk, streamline production, improve the customer experience, and impact the bottom line.
    8. Join Similar Birds-of-a-Feather: Engage industry peers with similar interests, or shared supply chain communities, in expanded roundtable discussions.
    9. Gain Unique Insight: Speak directly with the product experts responsible for Oracle's Value Chain Solutions.
    10. Save $400: Take advantage of the Super Saver rate by registering before September 26, 2014.

    Read the article

  • Create Pivot collections much faster than DeepZoomTools CollectionCreator class

    - by John Conwell
    I've been playing with Microsoft Live Labs Pivot to create a hierarchy of collections, all linked together, to allow someone to explore a hierarchy of data visually. The problem has been the generation time of the entire hierarchy. I end up creating 500-600 collections total, and it takes hours and hours using the CollectionCreator class that comes with the DeepZoomTools. So, digging around, I found a way to make the actual DeepZoom collection creation wicked fast. Don't use the CollectionCreator! Turns out Pivot doesn't actually use the image pyramid generated by the CollectionCreator. Or if it does, it's only when you open a new collection and it shows all the images zooming in. But once the zoom-in is complete, Pivot uses the individual DeepZoom images. What Pivot does need is the XML generated by the CollectionCreator, which is in a very simple format. So what I did was manually generate the XML for the collection image pyramid, then create the folder structure required (one folder per level of the pyramid), and put a single-pixel PNG file in each folder. Now I can create the required files and folders for 500 collections in about 10 seconds. Sweet! You still have to use the ImageCreator to create a DeepZoom image for each image in the collection, and that still takes some time, but at least the total processing time is way better.
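
    A rough sketch of that trick in Python. The folder-per-level layout follows the description above; the collection XML is only a guess at the general shape of DeepZoom's collection format (element and attribute names here are assumptions), so compare against a file actually generated by CollectionCreator before relying on it:

        # Sketch: fake the collection pyramid with placeholder tiles and
        # hand-write the collection XML instead of using CollectionCreator.
        import os, shutil

        def make_placeholder_pyramid(files_dir, max_level, one_px_png):
            for level in range(max_level + 1):      # one folder per level
                level_dir = os.path.join(files_dir, str(level))
                os.makedirs(level_dir, exist_ok=True)
                shutil.copy(one_px_png, os.path.join(level_dir, "0_0.png"))

        def write_collection_xml(xml_path, items, max_level):
            # items: list of (dzi_path, width, height), one per image
            out = [f'<Collection MaxLevel="{max_level}" TileSize="256" '
                   f'Format="png" '
                   f'xmlns="http://schemas.microsoft.com/deepzoom/2009">',
                   '  <Items>']
            for n, (dzi, w, h) in enumerate(items):
                out.append(f'    <I Id="{n}" N="{n}" Source="{dzi}">')
                out.append(f'      <Size Width="{w}" Height="{h}" />')
                out.append('    </I>')
            out += ['  </Items>', '</Collection>']
            with open(xml_path, "w") as f:
                f.write("\n".join(out))

    One make_placeholder_pyramid call plus one write_collection_xml call is the whole per-collection cost, which is why hundreds of collections finish in seconds while the real per-image DeepZoom work stays with ImageCreator.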

    Read the article

  • WiFi is not detected on Compaq Presario c700

    - by charan
    Hi, I have a Compaq Presario c700 laptop and I just installed Ubuntu 10.04 on it. My wireless networks are not showing in Network Manager.

        charan@charan-laptop:~$ lspci -nn
        00:00.0 Host bridge [0600]: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub [8086:2a00] (rev 03)
        00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller [8086:2a02] (rev 03)
        00:02.1 Display controller [0380]: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller [8086:2a03] (rev 03)
        00:1b.0 Audio device [0403]: Intel Corporation 82801H (ICH8 Family) HD Audio Controller [8086:284b] (rev 03)
        00:1c.0 PCI bridge [0604]: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 [8086:283f] (rev 03)
        00:1d.0 USB Controller [0c03]: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 [8086:2830] (rev 03)
        00:1d.1 USB Controller [0c03]: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 [8086:2831] (rev 03)
        00:1d.2 USB Controller [0c03]: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 [8086:2832] (rev 03)
        00:1d.7 USB Controller [0c03]: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 [8086:2836] (rev 03)
        00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev f3)
        00:1f.0 ISA bridge [0601]: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller [8086:2815] (rev 03)
        00:1f.1 IDE interface [0101]: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller [8086:2850] (rev 03)
        00:1f.2 SATA controller [0106]: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller [8086:2829] (rev 03)
        00:1f.3 SMBus [0c05]: Intel Corporation 82801H (ICH8 Family) SMBus Controller [8086:283e] (rev 03)
        01:00.0 Network controller [0280]: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 02)
        02:01.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ [10ec:8139] (rev 10)

    Read the article

  • 10.10 boots to command line login prompt

    - by greggory.hz
    I recently installed Ubuntu 10.10 on a computer that was previously running 10.04 (which worked fine). Now, each time I boot up, it starts at a command-line login prompt. I can log in and it stays at the command line (as expected). I can then manually start gdm with sudo start gdm and it works fine. I can also enable compiz (using proprietary nvidia drivers), so I'm reasonably confident that it's not a driver problem (at least not in the sense that the drivers just flat out aren't working). Interestingly, if I leave it at the command prompt without logging in, after about 5 or 10 minutes gnome starts up on its own. I'm not sure what is causing this. This is what dmesg | tail gives me after a manual start of gdm:

        [ 15.664166] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 270.18 Tue Jan 18 21:46:26 PST 2011
        [ 15.991304] type=1400 audit(1297543976.953:11): apparmor="STATUS" operation="profile_load" name="/usr/share/gdm/guest-session/Xsession" pid=990 comm="apparmor_parser"
        [ 16.606986] eth0: link up, 100Mbps, full-duplex, lpa 0xCDE1
        [ 18.798506] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0
        [ 26.740010] eth0: no IPv6 routers present
        [ 90.444593] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro,commit=0
        [ 189.252208] audit_printk_skb: 21 callbacks suppressed
        [ 189.252213] type=1400 audit(1297544150.218:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1876 comm="apparmor_parser"
        [ 189.252584] type=1400 audit(1297544150.218:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1876 comm="apparmor_parser"
        [ 351.159585] lo: Disabled Privacy Extensions

    Read the article

  • Youtube video streaming slow

    - by Newbie
    When I try to stream YouTube videos on my Ubuntu 11.04, they don't stream smoothly. They keep buffering and playback is choppy. Here's my config:

        Laptop: Gateway NV58
        RAM: 4 GB
        Ethernet controller: Broadcom Corp NetLink BCM5784M Gigabit Ethernet PCIe (rev 10)

    Let me know if you need more details. Output of lspci:

        00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
        00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
        00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
        00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
        00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
        00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03)
        00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
        00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
        00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
        00:1d.3 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
        00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
        00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03)
        00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03)
        00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
        02:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10)
        04:00.0 Network controller: Intel Corporation WiFi Link 5100

    Read the article

  • Why don't %MEM values add up to mem in top?

    - by ben
    I'm currently debugging performance issues with my VPS, and for that I'm trying to understand which of the processes eat the most memory. Reading top, here's what I get:

        Mem:   366544k total,  321396k used,   45148k free,    380k buffers
        Swap: 1048572k total,  592388k used,  456184k free,   7756k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        12339 ruby   20  0  844m  74m 2440 S    0 20.8  0:24.84 ruby
        12363 ruby   20  0  844m  73m 1576 S    0 20.6  0:00.26 ruby
        21117 ruby   20  0  171m  33m 1792 S    0  9.3  2:03.98 ruby
        11846 ruby   20  0  858m  21m 1820 S    0  6.0  0:09.15 ruby
        21277 ruby   20  0  219m  11m 1648 S    0  3.2  2:00.98 ruby
          792 root   20  0  266m  10m 1024 S    0  3.0  1:40.06 ruby
          532 mysql  20  0  234m 4760 1040 S    0  1.3  0:41.58 mysqld
          793 root   20  0  250m 4616  984 S    0  1.3  1:20.55 ruby
          586 root   20  0  156m 4532  848 S    0  1.2  6:17.10 god
        12315 ruby   20  0  175m 2412 1900 S    0  0.7  0:07.55 ruby
         3844 root   20  0 44036 2132 1028 S    0  0.6  1:08.22 ruby
        10939 ruby   20  0  179m 1884 1724 S    0  0.5  0:08.33 ruby
         4660 ruby   20  0  229m 1592 1440 S    0  0.4  2:55.46 ruby
         3879 nobody 20  0 37428  964  520 S    0  0.3  0:01.99 nginx

    As you can see, my memory is about 90% used (which is my issue), but when you add up the %MEM values it only comes to about 50-60%. Likewise, the RES values don't add up to ~350 MB. Why? Am I misunderstanding their meaning? Thanks
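
    For cross-checking, here is a minimal sketch (Linux-only Python) that sums resident set sizes from /proc the way top's %MEM column is derived (RES divided by MemTotal). Two properties of RES explain gaps like the one above: pages shared between processes are counted once per process that maps them, and pages that have been swapped out (note the ~578 MB of swap in use here) are not counted at all:

        # Sketch: sum VmRSS across all processes and compare to MemTotal.
        import glob, re

        def meminfo_total_kb():
            with open("/proc/meminfo") as f:
                return int(re.search(r"MemTotal:\s+(\d+)", f.read()).group(1))

        total = meminfo_total_kb()
        rss = 0
        for status in glob.glob("/proc/[0-9]*/status"):
            try:
                with open(status) as f:
                    m = re.search(r"VmRSS:\s+(\d+)", f.read())
                if m:                       # kernel threads have no VmRSS
                    rss += int(m.group(1))
            except FileNotFoundError:       # process exited mid-scan
                pass
        print(f"sum of RES: {rss} kB of {total} kB ({rss / total:.0%})")

    On a box that is swapping heavily, this sum can legitimately come out far below "used", because the swapped-out portion of each process simply is not resident.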

    Read the article

  • Dell Laptop's Bluetooth isn't detected by Ubuntu

    - by vishwaje
    I have a Dell Vostro 3500 laptop. I installed Ubuntu 12.04 and it doesn't detect my Bluetooth. Below is my information.

        lsusb | grep Bluetooth
        Bus 001 Device 003: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth)

        lsmod | grep bluetooth
        bluetooth 158438 10 bnep,rfcomm

        rfkill list
        0: brcmwl-0: Wireless LAN
           Soft blocked: yes
           Hard blocked: no
        1: dell-wifi: Wireless LAN
           Soft blocked: yes
           Hard blocked: no

    (WiFi is working fine; I disabled it myself using the network menu, so that part is expected.)

        dmesg | grep Bluetooth
        [ 21.981835] Bluetooth: Core ver 2.16
        [ 21.981872] Bluetooth: HCI device and connection manager initialized
        [ 21.981877] Bluetooth: HCI socket layer initialized
        [ 21.981881] Bluetooth: L2CAP socket layer initialized
        [ 21.981891] Bluetooth: SCO socket layer initialized
        [ 21.986047] Bluetooth: RFCOMM TTY layer initialized
        [ 21.986059] Bluetooth: RFCOMM socket layer initialized
        [ 21.986065] Bluetooth: RFCOMM ver 1.11
        [ 22.361783] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [ 22.361791] Bluetooth: BNEP filters: protocol multicast

    So how can I fix this? I read somewhere (I don't remember which site) that sometimes, if Bluetooth is disabled by a Windows driver, it can't be re-enabled from Linux, so I should install Windows, enable Bluetooth from there, and then reinstall Linux. That is impossible for me, because I don't have Windows installation media. But I tried this: I installed Windows XP in VirtualBox. VBox showed me two unknown USB devices. I connected them and installed the Windows Bluetooth drivers in XP, but it didn't detect the Bluetooth either. Also, when I set the hardware switch to off, those unknown devices disappear from VBox's USB device selection menu, so they are definitely something to do with WiFi or Bluetooth. Please help me.

    Read the article

  • Release/Change management - best approach

    - by Bob Rivers
    I asked this question a year ago on Stack Overflow and never got a good answer. Since Programmers seems to be a better place to ask it, I'll give it a try... What is the better way to work with release management? More specifically, what would be the best way to release packages? For example, assume that you have a relatively stable system, a good quality assurance (QA) process, etc. How do you prefer to release new versions? Let's assume that we are talking about a mid to large "centralized" web system (no clients), developed in-house, that can be considered "vital" for corporate operations. I tend to prefer releasing packages at regular intervals, no greater than 1 to 3 months apart. During this period I will include fixes and improvements in the package, and deploy to the production environment only once. But I've seen some people who prefer to push small changes to production with greater frequency. The claim of these people is that by doing so, it is easier to identify bugs that have slipped through the QA process: between a package with 10 changes and another with only 1, it is much easier to know what caused the problem in the package with just one change... What is your opinion?

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features, including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure, and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which, like the portal, call the REST Management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure. The new Windows Azure Portal provides a rich set of management features for Virtual Machines, including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these, which will cause Windows Azure to continuously replicate your storage to a secondary data center at least 400 miles away from your primary data center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we've released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally; no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.
    You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web sites, including the ability to monitor and track resource utilization in real time. You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment, as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow, including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience.

    Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use, allowing you to increase your reserved instance capacity as your traffic scales. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis, which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. Previously we referred to this capability as "hosted services"; with this week's release we are now referring to it as "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol, allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application, regardless of whether the cached data is stored on it or another role. The big benefit with the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might have otherwise been unused memory within your application VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively, and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems, allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements that are shipping in either preview or final form today; there is a LOT more in today's release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott

    Read the article

  • links for 2010-12-17

    - by Bob Rhubart
    Overview of Oracle Enterprise Manager Management Packs
    How does Oracle Enterprise Manager Grid Control do so much across so many different systems? Porus Homi Havewala has the answers. (tags: oracle otn grid soa entarch)

    How to overcome cloud computing hurdles - Computerworld
    What does it take to go from 'we should move to the cloud' to a successful cloud computing strategy? This excerpt from Silver Clouds, Dark Linings offers advice on crossing cloud chasms and developing a successful roadmap. (tags: ping.fm)

    Security in OBIEE 11g, Part 2
    Guest blogger Pravin Janardanam continues the discussion about OBIEE 11g Authorization and other Security aspects. (tags: oracle otn security businessintelligence obiee)

    Oracle Fusion Middleware Security: A Quick Note about Oracle Access Manager 11g and WebLogic
    "OAM 11g integrates with WebLogic using the very same components used to integrate OAM 10.1.4.3. Under most circumstances, that means using the OAM Identity Asserter...which asserts the OAM_REMOTE_USER header as the user principal in the JAAS subject." - Brian Eidelman (tags: WebLogic oracle)

    Comparison Between Cluster Multicast Messaging and Unicast Messaging Mode (WebLogic Wonders)
    "When servers are in a cluster, these member servers communicate with each other by sending heartbeats and indicating that they are alive. For this communication between the servers, either unicast or multicast messaging is used." - Divya Duryea (tags: weblogic oracle)

    Ron Batra: Cloud Computing Series: IV: Database.com, ExaData on Demand and connecting the dots
    Oracle ACE Director Ron Batra offers his assessment of recent rumblings in the Cloud. (tags: oracle otn oracleace cloud database)

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

    The code for serial and accelerated

    Consider a naïve (or brute force) serial implementation of matrix multiplication:

         0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
         1: {
         2:   for (int row = 0; row < M; row++)
         3:   {
         4:     for (int col = 0; col < N; col++)
         5:     {
         6:       float sum = 0.0f;
         7:       for(int i = 0; i < W; i++)
         8:         sum += vA[row * W + i] * vB[i * N + col];
         9:       vC[row * N + col] = sum;
        10:     }
        11:   }
        12: }

    We notice that each loop iteration is independent from the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included #include <amp.h> at the top of your file.

        13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
        14: {
        15:   concurrency::array_view<const float,2> a(M, W, vA);
        16:   concurrency::array_view<const float,2> b(W, N, vB);
        17:   concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC);
        18:   concurrency::parallel_for_each(c.grid,
        19:     [=](concurrency::index<2> idx) restrict(direct3d) {
        20:       int row = idx[0]; int col = idx[1];
        21:       float sum = 0.0f;
        22:       for(int i = 0; i < W; i++)
        23:         sum += a(row, i) * b(i, col);
        24:       c[idx] = sum;
        25:     });
        26: }

    First, a visual comparison, just for fun:

    - The beginning and end is the same, i.e. lines 0,1,12 are identical to lines 13,14,26.
    - The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25).
    - The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24).
    - We have extra lines in the C++ AMP version (15,16,17).

    Now let's dig in deeper.

    Using array_view and extent

    When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to a concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h; here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18), and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one:

        extent<2> e(M, W);
        array_view<const float, 2> a(e, vA);

    In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional unrelated variables (i.e.
    with the integers M and W); aren't you loving C++ AMP already? Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not copied back: a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

    The kernel dispatch

    On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c, which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner, we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)); the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional, and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

    The kernel itself

    The lambda body (lines 20-24), or as some may say the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads, and we can use those threads to index into the two input array_views (a, b) and write results into the output array_view (c). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a, b, c versus how we index into vA, vB, vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first-class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break the index down into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k or x,y,z, or M,N or whatever. Also note that we could have written line 24 as c(idx[0], idx[1]) = sum or c(row, col) = sum instead of the simpler c[idx] = sum.

    Targeting a specific accelerator

    Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on.
    So there would be some code like this anywhere before line 18:

        vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators();
        accelerator acc = accs[0];

    ...and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become:

        concurrency::parallel_for_each(acc.default_view, c.grid,

    ...and the rest of your code remains the same. How simple is that? Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Considering Embedding a Database? Choose MySQL!

    - by Bertrand Matthelié
    The M of the LAMP stack and the #1 database for Web-based applications, MySQL is also an extremely popular choice as an embedded database. Access our Resource Kit to discover the top reasons why:

    - 3,000 ISVs and OEMs rely on MySQL as their embedded database
    - 8 of the top 10 software vendors and hundreds of startups selected MySQL to power their cloud, on-premise and appliance-based offerings
    - Leading mobile and SaaS providers ensure continuous service availability and scalability with lower cost and risk using MySQL Cluster

    Learn how you can reduce costs and accelerate time to market while increasing performance and reliability. Access white papers, webinars, case studies and other resources in our Resource Kit.

    Read the article

  • Java SE 6 Update 23 is released!

    - by [email protected]
    Java SE 6 Update 23 includes the following key changes:

    Java VisualVM 1.3.1 introduces the following features and enhancements:
    - Added Java version and vendor information to the application Overview view
    - Built on NetBeans Platform and profiler 6.9.1

    Swing JMenuItem corrections for right-to-left languages:
    - Non-default alignment and text orientation for the menu items in Swing have been fixed
    - The position of the icon and the text where text used to overlap the icon in a menu item was corrected
    - All platform Look and Feel configurations will now handle menu items in right-to-left language situations

    Updated HotSpot 19 JVM engine with improvements to overall performance and reliability.

    Additional language support in SuSE Linux Enterprise Server 10 and 11 for Chinese (Simplified), Chinese (Traditional), Japanese, and Korean locales. For supported system configurations, check http://java.sun.com/javase/6/webnotes/install/system-configurations.html

    See the Release Notes for additional details.

    Read the article

  • Do you have a contract between the Product Owner and the Team?

    - by Martin Hinshelwood
    Working in Scrum, it is useful to define a Sprint Contract between the Product Owner (PO) and the implementation Team. Doing this helps to improve common understanding of, and sometimes to enforce, the relationship between the PO and the Team. This is simply an agreement between the PO and the Team for one Sprint; it is not really a commercial contract, and it should be confirmed via an e-mail at the beginning of every Sprint.

    "The implementation team agrees to do its best to deliver an agreed-on set of features (scope) to a defined quality standard by the end of the sprint. (Ideally they deliver what they promised, or even a bit more.) The Product Owner agrees not to change his instructions before the end of the Sprint." - Agile Project Management (http://agilesoftwaredevelopment.com/blog/peterstev/10-agile-contracts#Sprint)

    Each of the Sprints in a Scrum project can be considered a mini-project that has Time (Sprint Length), Scope (Sprint Backlog), Quality (Definition of Done) and Cost (Team Size * Sprint Length). Only the scope can vary, and this is measured every sprint.

    Figure: Good Example - the Product Owner should reply to the team and commit to the contract.

    This Rule has been added to SSW's Rules to Better Scrum with TFS.

    Technorati Tags: SSW, Scrum, SSW Rules

    Read the article

  • WebCenter News, Best Practices & Resources: Check Out the Latest Oracle WebCenter Newsletter

    - by Christie Flanagan
    Check out the latest edition of the Oracle Information InDepth Newsletter, featuring the latest Oracle WebCenter news, best practices and resources. In this issue, you'll find:

    - Five best practices for application integration from Oracle expert John Brunswick
    - A video demonstrating how to employ the advanced segmentation and targeting features in Oracle WebCenter Sites
    - The latest webcasts and events on how to engage your customers and empower your business with Oracle WebCenter

    Want to get the newsletter delivered directly to your inbox? Subscribe here.

    Read the article

  • Cannot boot into ubuntu 12.10

    - by sriram
    Below are the steps I followed to install Ubuntu 12.10 alongside the existing Windows 8 on my machine:

    1. I downloaded the Ubuntu 12.10 ISO and made a bootable USB from it.
    2. Restarted my machine and selected boot from USB in the BIOS.
    3. Booted into the live session and selected "Install Ubuntu alongside Windows 8".
    4. It asked for disk allocation, and I selected 550 GB for Ubuntu and 404 GB for Windows. After that it completed the Ubuntu installation.
    5. Then I booted into Windows 8 and used EasyBCD to add a new entry, "Ubuntu grub2".

    Now EasyBCD shows:

        There are a total of 4 entries listed in the bootloader.
        Default: Windows 8
        Timeout: 10 seconds
        EasyBCD Boot Device: C:\

        Entry #1
        Name: Lenovo Recovery System
        BCD ID: {e58d0cb6-2eae-11e2-9d20-806e6f6e6963}
        Device: \Device\HarddiskVolume3
        Bootloader Path: \EFI\Microsoft\Boot\LrsBootMgr.efi

        Entry #2
        Name: EFI USB Device
        BCD ID: {e58d0cb5-2eae-11e2-9d20-806e6f6e6963}
        Device: Unknown
        Bootloader Path:

        Entry #3
        Name: Windows 8
        BCD ID: {current}
        Drive: C:\
        Bootloader Path: \windows\system32\winload.efi

        Entry #4
        Name: Ubuntu 12.10
        BCD ID: {6f173570-3bce-11e2-be74-c0143dd589c0}
        Drive: C:\
        Bootloader Path: \NST\AutoNeoGrub0.mbr

    Next I restarted my system, and the boot options show Windows 8 and Ubuntu 12.10. When I click on Ubuntu it displays:

        \NST\AutoNeoGrub0.mbr
        status 0xc000007b
        The application or operating system could not be loaded because a required file is missing or contains errors.

    Can you help me resolve this... Thanks :)

    Read the article

  • One Does Like To Code: DevoxxUK

    - by Tori Wieldt
    What's happening at Devoxx UK? I'll be talking to rock star speakers, community leaders, authors, JSR leads and more. This video is a short introduction.

    Check out these great sessions on Thursday, June 12:

    - Perchance to Stream with Java 8, by Paul Sandoz. 13:40 - 14:30 | Room 1
    - Making the Internet-of-Things a Reality with Embedded Java, by Simon Ritter. 11:50 - 12:40 | Room 4
    - Java SE 8 Lambdas and Streams Lab, by Simon Ritter. 17:00 - 20:00 | Room Mezzanine
    - Safety Not Guaranteed: Sun. Misc. Unsafe and the Quest for Safe Alternatives, by Paul Sandoz. 18:45 - 19:45 | Room 3
    - Join the Java Evolution, by Heather VanCura and Patrick Curran. 19:45 - 20:45 | Room 2
    - Glassfish is Here to Stay, by David Delabassee and Antonio Goncalves. 19:45 - 20:45 | Room Expo

    Here is the full line-up of sessions. Devoxx UK includes a Hackergarten, where devs can work on an open source project of their choice. The Adopt OpenJDK and Adopt a JSR program folks will be there to help attendees contribute back to Java SE and Java EE itself!

    Saturday includes a special Devoxx4Kids event in conjunction with the London Java Community. It's designed to teach 10-16 year-olds simple programming concepts, robotics, electronics, and games making. Workshops include LEGO Mindstorms (robotic engineering), Greenfoot (programming), Arduino (electronics), Scratch (games making), Minecraft Modding (game hacking) and NAO (robotic programming). There is a small fee, and you must register.

    If you can't attend Devoxx UK in person, stay tuned to the YouTube/Java channel. I'll be doing plenty of interviews so you can join the fun from around the world.

    Read the article

  • How should I determine my rates for writing custom software?

    - by Carson Myers
    For custom software that will likely take a year or more to develop, how would I go about determining what to charge as a consultant? I'm having a hard time coming up with a number, and online searches are turning up vastly different figures (between $55/hr and $300/hr). I don't want to shoot too low, because the project is going to take me so much time (and I'm deferring my education for it). I also don't want to shoot too high and get unpleasant looks and demands for justification. FWIW, I live in Canada and have approx. 10 years of development experience. I've read the "take your salary and divide it by 1000" rule of thumb, but the thing is I don't have a salary. Currently I'm just doing fairly small programming tasks for a friend who is starting a marketing company, pricing each task fairly arbitrarily. I don't know what I would make over the course of a year doing it, but it would be incredibly low. My responsibilities for the project would be the architecture, programming, database, server, and UX to some degree. It's going to be a public-facing web service, so I will also need to put a lot of effort into security and scalability. Any advice or experience?

    Read the article
