Search Results

Search found 1172 results on 47 pages for 'sparc spread'.

Page 28/47 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it, whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums I often find a widespread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists, while programming is more about getting things done. What is normally implied here is that programming is something very practical and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc., may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good text books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools for getting things done. So my question is: why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • Partner Pricing and Best Practices Update Now Available

    - by swalker
    Click here to access the October 25, 2011 partner best practices update * (PDF). What does the October 25 partner pricing and best practices update contain? Pricing and licensing: Exalogic and SPARC SuperCluster update, Oracle Technology update, Oracle Fusion Applications update, Oracle Fusion Cloud Service update, Oracle Application Integration Architecture update, Siebel CRM applications update, Oracle CRM On Demand update, Business Process Outsourcing update. Notwithstanding any provision to the contrary in the partner distribution agreements, existing valid quotes issued by partners to end users before September 1, 2011 that are affected by the October 25, 2011 pricing and licensing update remain valid, and orders placed by partners under those quotes will be honored until November 30, 2011. Quotes issued by partners to end users on or after September 1, 2011 are subject to the terms and conditions of the partner distribution agreements. What do you need to do? Visit the Partner Pricing and Best Practices Updates page of the OPN portal regularly to learn more about these updates, to see the latest pricing, licensing terms and partner best practices, and to access the applicable resources. Additional information: To access the partner pricing and best practices updates and the archives of all previous updates, click here. * Oracle Confidential: The information contained in this communication is intended for members of the Oracle PartnerNetwork program. It is Oracle confidential information and may only be used in connection with your distribution or implementation of Oracle products or services to end users or to authorized Oracle partners.

    Read the article

  • November eSTEP newsletter for hardware partner presales

    - by user12842157
    Dear partner, We would like to draw your attention to the fact that the November edition of our eSTEP newsletter can now be found on the eSTEP portal. You will find the newsletter on our portal under eSTEP News ---> Latest Newsletter. To get access to the portal, use the link at the bottom of this blog. News from Oracle: Reflections on Oracle OpenWorld, Oracle Buys GoAhead. The technical corner: T4 processor, SPARC SuperCluster T4-4, Pillar Axiom 600, Oracle ZFS Appliance, Hybrid Columnar Compression Support for ZFS Storage Appliances and Pillar Axiom Storage Systems, Oracle Exalytics In-memory Machine, Oracle Big Data Appliance, Oracle Database Express Edition 11g Release 2 (Oracle Database XE), Oracle Public Cloud. Training and events: eSTEP Events Schedule, Recently Delivered TechCasts, Delivered Campaigns in 2011, Q&A covering Oracle Database Appliance. How to ...: Oracle Server Finder - choose the system that is right for you, Power calculator for all the HW, SW documentation search, TO YOUR ATTENTION - Remarks on new configuration options for the 7120. URL: http://launch.oracle.com/ PIN: has been sent to our contact list; otherwise contact: [email protected]. Versions of this newsletter can be found on the portal under "Archived Newsletters"; more information is also available under Events, Download and Links. We appreciate any feedback on the content of the portal and the other information we provide. Kind regards, Partner HW Enablement EMEA

    Read the article

  • Solaris 11: the new features as seen by the development teams

    - by Eric Bezille
    For those who are not on the distribution list of the French-speaking Solaris user community, here is a small compilation of links to the Solaris 11 developers' blogs, covering the new features in detail across many areas. The new features on the desktop side: What's new on the Solaris 11 Desktop?, S11 X11: ye olde window system in today's new operating system, Accessible Oracle Solaris 11 - released! The development tools: Nagging As a Strategy for Better Linking: -z guidance, Much Ado About Nothing: Stub Objects, Using Stub Objects, The Stub Proto: Not Just For Stub Objects Anymore, elffile: ELF Specific File Identification Utility. The new packaging system, Image Packaging System (IPS): Replacing the Application Packaging Developer's guide, IPS Self-assembly - Part 1: overlays, Self Assembly - Part 2: Multiple Packages Delivering Configuration. Hardened security in Solaris: Completely disabling root logins in Solaris 11, Password (PAM) caching for Solaris su - "a la sudo", User home directory encryption with ZFS, My 11 favorite Solaris 11 features (on security) - by Darren Moffat, Exciting crypto advances with the T4 processor and Oracle Solaris 11, SPARC T4 OpenSSL Engine, Solaris AESNI OpenSSL Engine for Intel Westmere. Incident management and self-healing: Service Management Facility (SMF) & Fault Management Architecture (FMA): Introducing SMF Layers, Oracle Solaris 11 - New Fault Management Features. Virtualization: Oracle Solaris Zones: These are 11 of my favorite things! (on zones) - by Mike Gerdts, Immutable Zones on Encrypted ZFS, The IPS System Repository (with zones) - by Tim Foster. A few bonuses from the Solaris community: Solaris 11 DTrace syscall Provider Changes, Solaris 11 - hostmodel (Control send/receive behavior for IP packets on a multi-homed system), A Quick Tour of Oracle Solaris 11. To wrap up, I also encourage you to consult this very useful reference document: Transition from Oracle Solaris 10 to Oracle Solaris 11. Happy reading!

    Read the article

  • How to facilitate code reviews in a small team for embedded software?

    - by Adam Lewis
    Short Question Does a cost-effective tool / workflow exist to facilitate code reviews in a small team? More specifically, a small team that relies on post-commit code reviews. Background Our team currently consists of 3 full time and 1 part time software engineers, with plans on hiring more in the near future. Due to our team size and the volume of projects we all must juggle, the pre-commit workflow that major tools (such as Review Board and Code Collaborator) use is not attainable for us right now. The best we can do at the moment is to perform post-commit reviews before major releases or as time permits. Nearly all of our projects are hosted on RepositoryHosting.com (which I highly recommend) and contain a mixture of SVN and Git repositories. Current Thoughts Since I cannot find a tool that fits our needs right now, I am turning to Trac, which is built into our repository's site. At the moment we use Trac to file tickets and track milestones, so to me this seems like a natural fit for code review results as well. The direction I am heading in right now is to use a spreadsheet (or spreadsheets) to log all of the bugs and comments, do some macro magic to get it into a format that works with Trac's ticket-import method, and use Trac's ticketing system to create the action items / bug reports automatically. The auto ticket generation is darn near a must-have; adding in bugs and comments one at a time from a web GUI is really painful. Secondary Question If this workflow makes sense, is there a good / standard template to use as a code review log?

    Read the article

  • Oracle Solaris at OpenWorld Tokyo 2012

    - by Markus Weber
    Oracle OpenWorld Tokyo will open its doors from Wednesday, April 4, 2012, until Friday, April 6, 2012, in Roppongi. If you've been to Tokyo as a gaijin, or foreigner, you know exactly where that is. Many of Oracle's top executives will be there, including Larry Ellison, Mark Hurd, and John Fowler. The keynotes that they are covering will be very interesting, for sure. Now, whether you will actually be there or not, you might still find it interesting that several great Solaris-related sessions will be held there, especially as part of the "Oracle Develop" track, such as: "Oracle Solaris 11 - Developers Need To Know", "How to build high performance and high security Oracle Database environment with Oracle SPARC/Solaris", "Oracle Solaris Tuning Contest", "IT Assets preservation and constructive migration with Oracle Solaris virtualization", and of course John Fowler's keynote "Server and Storage Systems Strategy". The complete schedule in English can be found here. We hope you can make it. If not, there will always be the San Francisco one.

    Read the article

  • How do we keep Active Directory resilient across multiple sites?

    - by Alistair Bell
    I handle much of the IT for a company of around 100 people, spread across about five sites worldwide. We're using Active Directory for authentication, mostly served to Linux (CentOS 5) systems via LDAP. We've been suffering through a spate of events where the IP tunnel between the two major sites goes down and the secondary domain controller at one site can't contact the primary domain controller at the other. It seems that the secondary domain controller starts denying user authentication within minutes of losing connectivity to the primary. How do we make the secondary domain controller more resilient to downtime? Is there a way for it to cache the entire directory and/or at least keep enough information locally to survive a multi-hour disconnection? (We're all in a single organizational unit if that makes any difference.) (The servers here are Windows Server 2003; don't assume that we set this up correctly. I'm a software engineer, not an IT specialist.)

    Read the article

  • How to fix E: Internal Error, No file name for libc6

    - by Loren Ramly
    How do I fix "E: Internal Error, No file name for libc6"? That is what shows up if I do: $ sudo apt-get upgrade or $ sudo apt-get install package For example: $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: ginn hplip hplip-data libdrm-dev libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libgrip0 libhpmud0 libkms1 libsane-hpaio libunity-2d-private0 libunity-core-5.0-5 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae printer-driver-hpcups printer-driver-hpijs unity unity-2d-common unity-2d-panel unity-2d-shell unity-2d-spread unity-common unity-services The following packages will be upgraded: alsa-base firefox firefox-globalmenu firefox-gnome-support firefox-locale-en icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-7-jre-jamvm libdbus-glib-1-2 libdbus-glib-1-dev libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libssl-dev libssl-doc libssl1.0.0 linux-sound-base openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openjdk-7-jdk openjdk-7-jre openjdk-7-jre-headless openjdk-7-jre-lib openssl sudo 27 upgraded, 0 newly installed, 0 to remove and 26 not upgraded. 3 not fully installed or removed. Need to get 0 B/126 MB of archives. After this operation, 3,072 B of additional disk space will be used. Do you want to continue [Y/n]? y E: Internal Error, No file name for libc6 I have followed the instructions from here: E: Internal Error, No file name for libssl1.0.0. They say to do: sudo apt-get update sudo apt-get clean sudo apt-get install -fy sudo dpkg -i /var/cache/apt/archives/*.deb sudo dpkg --configure -a sudo apt-get install -fy sudo apt-get dist-upgrade But I am stuck with the same error, E: Internal Error, No file name for libc6, when running the command sudo apt-get install -fy. And I've been looking on Google, but have not been successful so far. Thanks.
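
    A workaround that is often suggested for this class of apt error is to hand dpkg a fresh copy of the affected package and then let apt repair the rest. A minimal sketch, assuming the problem package really is libc6 and that the mirrors still carry the exact version apt expects (the commands are standard apt/dpkg usage, but whether they clear this particular error is an assumption):

      sudo apt-get clean                                        # drop possibly stale cached .debs
      sudo apt-get install --reinstall --download-only libc6    # fetch a fresh libc6 package into the cache
      sudo dpkg -i /var/cache/apt/archives/libc6_*.deb          # install it directly
      sudo apt-get install -f                                   # let apt fix up the remaining dependencies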

    Read the article

  • Allow JMX connection on JVM 1.6.x

    - by Martin Müller
    While trying to monitor a JVM on a remote system using VisualVM, the activation of JMX gave me some challenges. Dr. Google and my employer's documentation quickly revealed some -D opts needed for JMX, but strangely it only worked for a Solaris 10 system (my setup: MacOS laptop monitoring SPARC Solaris based JVMs). On S11 with the same opts I saw "my" JVM listening on port 3000 (which I chose for JMX), but VisualVM was not able to get a connection. Finally I found out that at least my S11 installation needed an explicit setting of the RMI host name. This is what finally worked: -Dcom.sun.management.jmxremote=true \ -Dcom.sun.management.jmxremote.ssl=false \ -Dcom.sun.management.jmxremote.authenticate=false \ -Dcom.sun.management.jmxremote.port=3000 \ -Djava.rmi.server.hostname=s11name.us.oracle.com \ Maybe this post saves someone else the time I spent on research.
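
    As a hedged follow-up sketch: once the target JVM is started with the flags above, VisualVM can usually be pointed straight at the JMX endpoint from the command line (the host name and port below are simply the ones from this example, not a general recommendation):

      jvisualvm --openjmx s11name.us.oracle.com:3000
      # equivalently, in the VisualVM GUI: File > Add JMX Connection, then enter host:port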

    Read the article

  • Cross-platform, human-readable, du on root partition that truly ignores other filesystems

    - by nice_line
    I hate this so much: Linux builtsowell 2.6.18-274.7.1.el5 #1 SMP Mon Oct 17 11:57:14 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux df -kh Filesystem Size Used Avail Use% Mounted on /dev/mapper/mpath0p2 8.8G 8.7G 90M 99% / /dev/mapper/mpath0p6 2.0G 37M 1.9G 2% /tmp /dev/mapper/mpath0p3 5.9G 670M 4.9G 12% /var /dev/mapper/mpath0p1 494M 86M 384M 19% /boot /dev/mapper/mpath0p7 7.3G 187M 6.7G 3% /home tmpfs 48G 6.2G 42G 14% /dev/shm /dev/mapper/o10g.bin 25G 7.4G 17G 32% /app/SIP/logs /dev/mapper/o11g.bin 25G 11G 14G 43% /o11g tmpfs 4.0K 0 4.0K 0% /dev/vx lunmonster1q:/vol/oradb_backup/epmxs1q1 686G 507G 180G 74% /rpmqa/backup lunmonster1q:/vol/oradb_redo/bisxs1q1 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl1 lunmonster1q:/vol/oradb_backup/bisxs1q1 686G 507G 180G 74% /bisxs1q/backup lunmonster1q:/vol/oradb_exp/bisxs1q1 2.0T 1.1T 984G 52% /bisxs1q/exp lunmonster2q:/vol/oradb_home/bisxs1q1 10G 174M 9.9G 2% /bisxs1q/home lunmonster2q:/vol/oradb_data/bisxs1q1 52G 5.2G 47G 10% /bisxs1q/oradata lunmonster1q:/vol/oradb_redo/bisxs1q2 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl2 ip-address1:/vol/oradb_home/cspxs1q1 10G 184M 9.9G 2% /cspxs1q/home ip-address2:/vol/oradb_backup/cspxs1q1 674G 314G 360G 47% /cspxs1q/backup ip-address2:/vol/oradb_redo/cspxs1q1 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl1 ip-address2:/vol/oradb_exp/cspxs1q1 4.1T 1.5T 2.6T 37% /cspxs1q/exp ip-address2:/vol/oradb_redo/cspxs1q2 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl2 ip-address1:/vol/oradb_data/cspxs1q1 160G 23G 138G 15% /cspxs1q/oradata lunmonster1q:/vol/oradb_exp/epmxs1q1 2.0T 1.1T 984G 52% /epmxs1q/exp lunmonster2q:/vol/oradb_home/epmxs1q1 10G 80M 10G 1% /epmxs1q/home lunmonster2q:/vol/oradb_data/epmxs1q1 330G 249G 82G 76% /epmxs1q/oradata lunmonster1q:/vol/oradb_redo/epmxs1q2 5.0G 609M 4.5G 12% /epmxs1q/rdoctl2 lunmonster1q:/vol/oradb_redo/epmxs1q1 5.0G 609M 4.5G 12% /epmxs1q/rdoctl1 /dev/vx/dsk/slaxs1q/slaxs1q-vol1 183G 17G 157G 10% /slaxs1q/backup /dev/vx/dsk/slaxs1q/slaxs1q-vol4 173G 58G 106G 36% /slaxs1q/oradata /dev/vx/dsk/slaxs1q/slaxs1q-vol5 75G 952M 71G 2% /slaxs1q/exp /dev/vx/dsk/slaxs1q/slaxs1q-vol2 9.8G 381M 8.9G 5% /slaxs1q/home /dev/vx/dsk/slaxs1q/slaxs1q-vol6 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl1 /dev/vx/dsk/slaxs1q/slaxs1q-vol3 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl2 /dev/mapper/appoem 30G 1.3G 27G 5% /app/em Yet, I equally, if not quite a bit more, also hate this: SunOS solarious 5.10 Generic_147440-19 sun4u sparc SUNW,SPARC-Enterprise Filesystem size used avail capacity Mounted on kiddie001Q_rpool/ROOT/s10s_u8wos_08a 8G 7.7G 1.3G 96% / /devices 0K 0K 0K 0% /devices ctfs 0K 0K 0K 0% /system/contract proc 0K 0K 0K 0% /proc mnttab 0K 0K 0K 0% /etc/mnttab swap 15G 1.8M 15G 1% /etc/svc/volatile objfs 0K 0K 0K 0% /system/object sharefs 0K 0K 0K 0% /etc/dfs/sharetab fd 0K 0K 0K 0% /dev/fd kiddie001Q_rpool/ROOT/s10s_u8wos_08a/var 31G 8.3G 6.6G 56% /var swap 512M 4.6M 507M 1% /tmp swap 15G 88K 15G 1% /var/run swap 15G 0K 15G 0% /dev/vx/dmp swap 15G 0K 15G 0% /dev/vx/rdmp /dev/dsk/c3t4d4s0 3 20G 279G 41G 88% /fs_storage /dev/vx/dsk/oracle/ora10g-vol1 292G 214G 73G 75% /o10g /dev/vx/dsk/oec/oec-vol1 64G 33G 31G 52% /oec/runway /dev/vx/dsk/oracle/ora9i-vol1 64G 33G 31G 59% /o9i /dev/vx/dsk/home 23G 18G 4.7G 80% /export/home /dev/vx/dsk/dbwork/dbwork-vol1 292G 214G 73G 92% /db03/wk01 /dev/vx/dsk/oradg/ebusredovol 2.0G 475M 1.5G 24% /u21 /dev/vx/dsk/oradg/ebusbckupvol 200G 32G 166G 17% /u31 /dev/vx/dsk/oradg/ebuscrtlvol 2.0G 475M 1.5G 24% /u20 kiddie001Q_rpool 31G 97K 6.6G 1% /kiddie001Q_rpool monsterfiler002q:/vol/ebiz_patches_nfs/NSA0304 203G 173G 29G 86% 
/oracle/patches /dev/odm 0K 0K 0K 0% /dev/odm The people with the authority don't rotate logs or delete packages after install in my environment. Standards, remediation, cohesion...all fancy foreign words to me. ============== How am I supposed to deal with / filesystem full issues across multiple platforms that have a devastating number of mounts? On Red Hat el5, du -x apparently avoids traversal into other filesystems. While this may be so, it does not appear to do anything if run from the / directory. On Solaris 10, the equivalent flag is du -d, which apparently packs no surprises, allowing Sun to uphold its legacy of inconvenience effortlessly. (I'm hoping I've just been doing it wrong.) I offer up for sacrifice my Frankenstein's monster. Tell me how ugly it is. Tell me I should download forbidden 3rd party software. Tell me I should perform unauthorized coreutils updates, piecemeal, across 2000 systems, with no single sign-on, no authorized keys, and no network update capability. Then, please help me make this bastard better: pwd / du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \ cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \ sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \ cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shx | \ egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M" My biggest failure and regret is that it still requires a single character edit for Solaris: pwd / du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \ cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \ sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \ cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shd | \ egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M" This will exclude all non / filesystems in a du search from the / directory by basically munging an egrepped df from a second pipe-delimited egrep regex subshell exclusion that is naturally further excluded upon by a third egrep in what I would like to refer to as "the whale." The munge-fest frantically escalates into some xargs du recycling where -x/-d is actually useful, and a final, gratuitous egrep spits out a list of directories that almost feels like an accomplishment: Linux: 54M etc/gconf 61M opt/quest 77M opt 118M usr/ ##===\ 149M etc 154M root 303M lib/modules 313M usr/java ##====\ 331M lib 357M usr/lib64 ##=====\ 433M usr/lib ##========\ 1.1G usr/share ##=======\ 3.2G usr/local ##========\ 5.4G usr ##<=============Ascending order to parent 94M app/SIP ##<==\ 94M app ##<=======Were reported as 7 GB and then corrected by second du with -x. Solaris: 63M etc 490M bb 570M root/cores.ric.20100415 1.7G oec/archive 1.1G root/packages 2.2G root 1.7G oec Guess what? It's really slow. Edit: Are there any bash one-liner heroes out there that can turn my bloated abomination into divine intervention, or at least something resembling gingerly copypasta?
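
    For comparison, a hedged and much smaller sketch that avoids grepping df altogether: pick the platform's "do not cross mount points" flag once, then let du and sort do the work (it assumes GNU du on the Linux boxes and /usr/bin/du on Solaris 10; the threshold of 30 entries is arbitrary):

      #!/bin/sh
      # stay on the / filesystem and list the largest directories, largest last
      OS=`uname -s`
      if [ "$OS" = "SunOS" ]; then ONEFS=-d; else ONEFS=-x; fi   # Solaris du: -d, GNU du: -x
      cd / && du $ONEFS -k . 2>/dev/null | sort -n | tail -30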

    Read the article

  • New ATG Web Commerce Specialization is Hot, Hot, Hot

    - by Kristin Rose
    The roof, the roof, the roof is on fire! Record-breaking temperatures aren't the only things raising the thermometer this summer – not since the new ATG Specialization became available, and get this – we already have a list of partners who have achieved their ATG Web Commerce Specialization, including: Accenture, AAXIS Commerce, Knowledge Path, ObjectEdge, Professional Access and ThinkWrap. Now that's just sizzling! As part of this smokin' hot Specialization, Oracle is offering ATG Commerce 10 Implementation Developer Boot Camps. Through direct hands-on experience and technical training, developers and software architects will gain some serious insight into best practices, as well as relevant and applicable implementation experience to keep cool under pressure. So if you're ready to stand out, be sensational and separate yourself from the competition, learn about the steps you need to take to become ATG Web Commerce Specialized today, and don't forget to spread the word over Facebook and Twitter! Setting Fire to the Rain, The OPN Communications Team

    Read the article

  • Does DirectX implement Triple Buffering?

    - by Asik
    As AnandTech put it best in this 2009 article: "In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default). The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed)." As I understand it, the DirectX "Swap Chain" is merely a render-ahead queue, i.e. buffers cannot be dropped; the longer the chain, the greater the input latency. At the same time, I find it hard to believe that the most widely used graphics API would not implement such fundamental functionality correctly. Is there a way to get proper triple-buffered vertical synchronisation in DirectX?

    Read the article

  • Sharing storage on Linux and Solaris

    - by devlearn
    I'm looking for a solution in order to share a SAN-mounted volume between several hosts running Linux (RHEL) and/or Solaris (SPARC). Note that I basically need to share a set of directories containing large binary files that are accessed in random R/W mode. I have the following requirements: keep the data on the SAN; suitable I/O performance, as the software is pretty demanding on IOPS; stick to a shared file system, as I can't afford a cluster FS (lack of MDS/OSS infrastructure); compression could be really useful. For now I've found only the following candidates: GFS2, supports Linux only, no compression; VxFS, supports Linux and Solaris, compression supported. So if you have some suggestions for this list, I'll really welcome them. Thanks in advance,

    Read the article

  • What are the possible options for AI path-finding etc. when the world is "partitioned"?

    - by Sebastien Diot
    If you anticipate a large persistent game world, and you don't want to end up with some game server crashing due to overload, then you have to design from the ground up a game world that is partitioned into chunks. This is in particular true if you want to run your game servers in the cloud, where each individual VM is relatively weak, and memory and CPU are at a premium. I think the biggest challenge here is that the player receives all the parts around the location of the avatar, but mobs/monsters are normally located in the server itself, and can only directly access the data about the part of the world that the server owns. So how can we make the AI behave realistically in that context? It can send queries to the other servers that own the neighboring parts, but that sounds rather network intensive and latency prone. It would probably be more performant for each mob AI to be spread over the neighboring parts, and proactively send the relevant info to the part that contains the actual mob at the moment. That would also reduce the stress of a mob crossing a border between two parts, and therefore "switching server". Have you heard of any AI design that solves those issues? Some kind of distributed AI brain? Maybe some kind of "agent" community working together through message passing?

    Read the article

  • 2-D Codes in Retail

    - by David Dorf
    The UPC you find on packaging is a one-dimensional barcode that's been in use, in one form or another, since the 1970s. While it's a good symbology to encode numbers like a product identifier, it's not really big enough to hold much more. It also requires a barcode scanner (like those connected to the POS), although iPhone apps like RedLaser have proved a mobile camera can be made to work in many situations. The next generation of barcodes is two-dimensional and therefore capable of holding much more information as well as being more conducive to cameras. The most popular format is the QR Code, widely used in Japan because almost every mobile phone has a built-in reader. A typical use for QR Codes is to embed a URL so that a mobile phone can quickly navigate to the specified web page. QR Codes can be found on posters, billboards, catalogs, and circulars. Speaking of which, Best Buy recently put a QR Code in their circular as shown below. In fact, they even updated their iPhone application to include a QR Code reader. I was able to scan the barcode above right from the screen with my iPhone without issues, even though it's fairly small in this image. Clearly they are planning to incorporate more QR Codes in their stores and advertising. If you haven't seen QR Codes before, you're not looking hard enough. They are around and will continue to spread.
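
    As an aside, generating such a code is a one-liner with common tooling. A hedged sketch using the qrencode command-line tool (the tool choice and the URL are illustrative assumptions, not something from the article):

      qrencode -o circular-ad.png -s 6 "http://www.example.com/weekly-ad"
      # -o names the output PNG, -s sets the pixel size of each module; any QR reader app can then scan the image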

    Read the article

  • Multiple vulnerabilities in Firefox

    - by RitwikGhoshal
    Component: Firefox. Product and Resolution: Solaris 10 - SPARC: 145080-12, X86: 145081-11.
    CVE              Description                                                                              CVSSv2 Base Score
    CVE-2012-1960    Information Exposure vulnerability                                                       5.0
    CVE-2012-1970    Denial of Service (DoS) vulnerability                                                    10.0
    CVE-2012-1971    Denial of Service (DoS) vulnerability                                                    9.3
    CVE-2012-1972    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-1973    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-1974    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-1975    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-1976    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3956    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3957    Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability    10.0
    CVE-2012-3958    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3959    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3960    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3961    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3962    Arbitrary code execution vulnerability                                                   9.3
    CVE-2012-3963    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3964    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3966    Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability    10.0
    CVE-2012-3967    Arbitrary code execution vulnerability                                                   6.8
    CVE-2012-3968    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3969    Numeric Errors vulnerability                                                             9.3
    CVE-2012-3970    Resource Management Errors vulnerability                                                 10.0
    CVE-2012-3972    Information Exposure vulnerability                                                       5.0
    CVE-2012-3974    Resource Management Errors vulnerability                                                 6.9
    CVE-2012-3976    Denial of Service (DoS) vulnerability                                                    5.8
    CVE-2012-3978    Permissions, Privileges, and Access Controls vulnerability                               6.8
    CVE-2012-3980    Improper Control of Generation of Code ('Code Injection') vulnerability                  9.3
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so "verified". The URL they send people to to confirm verification is something like "blah.com/verify/company1" and "blah.com/verify/company2". But logically "blah.com/verify" itself is not verifying anyone in particular, so it redirects to the signup form to get verified, at "blah.com/verify/register". As far as the actual companies registered, I figure it doesn't make sense to index every individual URL with only the tiny difference of which company name it's saying yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well because it's a signup form, not a verification page, so technically it has quite different content from the various verification pages themselves. But at the same time it's a bit unfair to choose one company to point all the canonical benefits to and use that as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best for this from a search engine standpoint?

    Read the article

  • Does Ubuntu 11.10 support ns2.29?

    - by nasser
    I need your help. I'm a beginner with NS2, and I'm trying to install ns2.29 on Ubuntu 11.10 32-bit but I can't. This message appears and the installation stops: Build tcl8.4.11 ============================================================ loading cache ./config.cache checking whether to use symlinks for manpages... no checking whether to compress the manpages... no checking whether to add a package name suffix for the manpages... no checking for gcc... gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler... no checking whether we are using GNU C... yes checking whether gcc accepts -g... yes checking for building with threads... no (default) checking if the compiler understands -pipe... yes checking how to run the C preprocessor... gcc -pipe -E checking for sin... no checking for main in -lieee... yes checking for main in -linet... no checking for net/errno.h... no checking for connect... yes checking for gethostbyname... yes checking how to build libraries... static checking for ranlib... ranlib checking if 64bit support is requested... no checking if 64bit Sparc VIS support is requested... no checking system version (for dynamic loading)... ./configure: 1: Syntax error: Unterminated quoted string tcl8.3.2 configuration failed! Exiting ... Tcl is not part of the ns project. Please see www.Scriptics.com to see if they have a fix for your platform. Can anyone help me?
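
    One hedged guess at the cause, since the failure is a plain /bin/sh syntax error on line 1 of an old Tcl configure script: on Ubuntu 11.10, /bin/sh is dash, which older autoconf-generated scripts sometimes trip over. Two commonly suggested workarounds (assumptions, not verified against ns-2.29 specifically):

      sudo dpkg-reconfigure dash      # answer "No" so /bin/sh points back to bash
      # or, without changing the system default, run the bundled installer explicitly under bash:
      cd ns-allinone-2.29 && bash ./install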

    Read the article

  • Cross-Platform Migration using Heterogeneous Data Guard

    - by Roy F. Swonger
    Most people think of Data Guard as a disaster recovery solution, and it certainly excels in that role. However, did you know that you can also use Data Guard for platform migration under some conditions? While you would normally have your primary and standby Data Guard systems running on the same OS and hardware platform, there are some heterogeneous combinations of primary and standby systems that are supported by Data Guard Physical Standby. One example of heterogeneous Data Guard support is the ability to go between Linux and Windows on many processor architectures. Another is the support for environments that are running HP-UX on both PA-RISC and Itanium hardware. Brand new in 11.2.0.2 is the ability to have both SPARC Solaris and IBM AIX on Power Systems in the same Data Guard environment. See My Oracle Support note 413484.1 for all the details about supported platform combinations. So, why mention this in an upgrade blog? Simple: much of the time required for a platform migration is usually spent copying files from one system to another. If you are moving between systems that are supported by heterogeneous Data Guard, then you can reduce that migration downtime to a matter of minutes. This can be a big win when downtime is at a premium (and isn't downtime always at a premium?). In addition, you get the benefit of being able to keep the old and new environments synchronized until you are sure the migration is successful! A great case study of using Data Guard for a technology refresh is located on this OTN page. The case study showing CERN's methodology isn't highlighted as a link on the overview page, but it is clickable. As always, make sure you are fully versed on the details and restrictions by reading the available documentation and MOS notes. Happy migrating!

    Read the article

  • How to remove seams from a tile map in 3D?

    - by Grimshaw
    I am using my OpenGL custom engine to render a tilemap made with Tiled, using a well-spread tileset from the web. There is nothing fancy going on. I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera in my 3D world and at Z=0 there is a plane showing me my tiles. Everything is working correctly but I get ugly seams between the tiles. I've tried orthographic and perspective cameras and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show, but otherwise they are there 99% of the time in multiple patterns, depending on the zoom and camera parameters like field of view. Here's a screenshot of the artifact being shown: http://i.imgur.com/HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly): http://i.imgur.com/SjjHK4q.png My tileset has no mipmaps and is set to GL_NEAREST and GL_CLAMP_TO_EDGE values. I've looked around many articles on the internet and nothing helped. I tried UV correction so the UVs fall at half of the texel, rather than the end of the texel, to prevent interpolating with the neighbour value (which is transparency). I tried debugging my geometry and I verified that with no texture and a random color in each tile, I don't seem to see any seams. All vertices have integer coordinates, i.e. the first tile is a quad from (0,0) to (1,1) and so on. I tried adding a little offset both to the UVs and to the vertices to see if the gaps cease to exist. I disabled multisampling too. Nothing fixed it so far. Thanks.

    Read the article

  • Wessty: Live with HTML 5 (2011 Speaker Tour)

    - by David Wesst
    That's right: Wessty is on tour. Okay, the banner and the tour is a little over the top, but I am really excited about my upcoming speaking engagements to spread the word about HTML 5! I have already kicked off the tour with the Winnipeg Code Camp last weekend with the world premiere of the HTML 5 for .NET Pro presentation, and the turnout was fantastic. It was the last presentation of the day, but we still had some great questions about the new standard and got to see how HTML 5 can fit into .NET web applications today. In any case, above you can see the confirmed presentations that I will be doing so far in 2011, but there are a few more events that I have heard about that I hope to add to that list. Ultimately, expect that list to be updated over the course of the year as the year is young and there are plenty of conferences coming up! Presentation Resources As the tour continues, I will be posting the slides and the source code for the demonstrations up here on my site. They will be free of charge and give you the chance to review the demos and hopefully take advantage of some of the cool things you see in the presentations. Become part of the Tour If you are considering hosting an event where you think that HTML 5 could use a voice, drop me a line and let me know. I am always looking for opportunities to grow the tour to talk not just about HTML 5, but a variety of topics that relate to user interface and user experience development. This post also appears at http://david.wes.st

    Read the article

  • What's the best platform to publish documentation to internal users?

    - by serialhobbyist
    My team has a need to publish documentation internally. At the moment, it's spread all over the place and this means we often have to search everywhere to find something. We'd like to publish everything in one place. The main thing that stops us is access control - the wikis in place don't belong to us and we can't do it. What is the best tool for publishing docs, ideally fitting these requirements: web front end - readers access docs using a browser; single place to put docs; access control by individual doc or by sets of docs (folders, branch of 'site', ...); if you don't have access to a doc, you don't see the link to that page/doc/folder; either a built-in editor or something my users are familiar with (e.g. Word); built-in version control would be nice. Also, can you think of other criteria I should've specified?

    Read the article

  • What's the best structure for a repository?

    - by jpmelos
    I've looked into many open source software repositories, and I've found some common elements and some things people do differently from one another. For example, every repository has a README file, an INSTALL file, a COPYING file and stuff like that. Other things differ: some projects, like git, have their source code at the root level, while others have the source code in a src/ folder and others, like the Linux kernel, have the source code spread across different folders at the root level that divide the code by area; some have their tests in a t/ folder, while others in a tests/ folder, or named otherwise; some have files about submitting patches and who the maintainers are, and those might be inside some Documentation/ folder or at the root level. Are there recommendations? A best practice? For example: personally, I don't like the code at the root level, git-fashion. It looks messy and confuses anyone trying to start as a contributor (especially because they have some code inside folders, and scripts at the root level as well; it's really messy). If I were to start a project of my own and wanted to get it right from the start, are there recommendations? Best practices? How can I make a clean and clear structure? Thank you!
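
    There is no single standard, but as a sketch of one common middle-ground layout (the names are illustrative, not prescriptive):

      project/
        README        -- what it is and how to get started
        COPYING       -- license (some projects use LICENSE instead)
        INSTALL       -- build and install notes
        docs/         -- user and contributor documentation, maintainers, patch guidelines
        src/          -- production code, optionally split by area
        tests/        -- or t/; mirror the src/ layout where practical
        tools/        -- build helpers and scripts kept out of src/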

    Read the article

  • Can't access Samba shares on Ubuntu Server from other computers

    - by larand
    Installed Ubuntu Server 12.04 and configured /etc/samba/smb.conf as: #======================= Global Settings ======================= [global] workgroup = HEMMA server string = %h server (Samba, Ubuntu) security = user wins support = yes dns proxy = no log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d encrypt passwords = no passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes map to guest = bad user ############ Misc ############ usershare allow guests = yes #======================= Share Definitions ======================= [printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = yes create mask = 0700 # Windows clients look for this share name as a source of downloadable # printer drivers [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = no [Bilder original] comment = Original bilder path = /mnt/bilder/org browseable = yes read only = no guest ok = no create mask = 0755 [Bilder publika] comment = Bilder för allmän visning path = /mnt/bilder/public browseable = yes read only = yes guest ok = yes [Musik] comment = Musik path = /mnt/music/public browseable = yes read only = yes guest ok = yes I have a network set up around a 4G router "HUAWEI B593" where some computers are connected by WiFi and others by LAN. The server is connected by LAN. On one computer running Windows XP I can see the server but am not allowed to access the shares. On another computer on the WiFi net running Win7 I cannot see the server at all, but I can ping the server and I can see that the SMB protocol is running when sniffing with Wireshark. I don't primarily want to use passwords; computers on the LAN and WiFi should be able to connect without any login procedure. I'm sure my config is not sufficient, but I find it hard to understand what I should do. There are a lot of descriptions on the net but most are old and none have been of any help. I'm also confused by the fact that I cannot see the server on my Win7 machine even though it communicates with the Samba server. Would be very happy if anyone could shed some light on this mess.
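
    A hedged first-pass checklist, assuming the share names above; note that with "security = user", the non-guest shares also need each Unix user added to Samba's password database:

      testparm -s                          # on the server: validate smb.conf syntax
      sudo smbpasswd -a <unixuser>         # on the server: add a Samba password for the non-guest shares
      smbclient -L //<server-ip> -N        # from a client: list the shares anonymously
      smbclient "//<server-ip>/Musik" -N   # a "guest ok = yes" share should open without a password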

    Read the article

  • Automate configuration change on Outlook 2007

    - by Julien Vehent
    I am migrating a bunch of mailboxes to Google Apps. Each user owns several mailboxes, each serving a different domain (john has [email protected], [email protected], and so on...). Currently, those accounts are hosted on an old SMTP/POP server (edit: NOT an Exchange server) that we want to replace, and I need to edit their Outlook 2007 configuration to change the POP, SMTP and password parameters. The hard way to do it is to connect to each Outlook session and edit the parameters manually. I want to avoid that, because that represents over 700 accounts spread between 40 users... :'( How can I automate this configuration change? In Active Directory? Using a PRF file? Note: I'm a Linux sysadmin with very little knowledge of Windows's black magic.

    Read the article
