Search Results

Search found 18808 results on 753 pages for 'security updates'.


  • Can thousands of backlinks from the same site harm PageRank?

    - by Dejan Pelzel
    I just noticed that one particular site has almost 7000 backlinks pointing to our website. The site is something like a news aggregator, and for each post they created around 20 (sometimes many more) backlinks to our page, linking to over 400 of our pages in total. I am beginning to get concerned that this amount of links might harm our site. They seem to have more backlinks to our site than all the other sites combined, and more backlinks than our website has pages. We have seen a massive negative effect for quite a while now, and our PageRank seems to have dropped to None (not even 0), but I am not sure when and why exactly that happened, seeing that PageRank updates take quite a while to appear. The site linking to us is otherwise pretty reputable and doesn't seem to be having any problems with its rank (PR 6, actually). I was thinking of using the Google disavow tool for this site, but I don't want to make things even worse. Do you think these links are harmful? If so, how do I fix this? Thanks :)

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game (a shooter) we have to set up a scene with a minimum of 30 bone-animated models. The problem is that the update process for the animated models takes too long. Here is what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with its new matrix. The physics receives the event and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2ms for 30 bones, 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix layout for transforms, so it is necessary to convert it. Code for the matrix conversion (~0.005ms):

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat )
        {
            btMatrix3x3 bulletRotation;
            btVector3 bulletPosition;
            XMFLOAT4X4 matData = mat.GetStorage();

            // copy rotation matrix (transposed to match Bullet's layout)
            for ( int row = 0; row < 3; ++row )
                for ( int column = 0; column < 3; ++column )
                    bulletRotation[row][column] = matData.m[column][row];

            // copy translation
            for ( int column = 0; column < 3; ++column )
                bulletPosition[column] = matData.m[3][column];

            return btTransform( bulletRotation, bulletPosition );
        }

    The function for updating the transform (physics):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove( Mat mat, ActorId aid )
        {
            if ( btRigidBody * const body = FindActorBody( aid ) )
            {
                btTransform tmp = Mat_to_btTransform( mat );
                body->setWorldTransform( tmp );
            }
        }

    The real problem is the function FindActorBody(id):

        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
        if ( found != m_actorBodies.end() )
            return found->second;

    All physics actors are stored in m_actorBodies, and that lookup is why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias
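
    One common way to attack a hot per-tick map lookup like this (not from the original post, just a sketch) is to resolve the btRigidBody once, when the actor is created, and cache the pointer next to the bone that needs it, so the per-tick path does no searching at all. BoneBinding and BindBone are invented names for illustration:

        // Sketch only: pay the m_actorBodies.find() cost once, at setup time,
        // then reuse the cached pointer every tick.
        struct BoneBinding
        {
            btRigidBody* body = nullptr;   // cached; owned by the physics world
        };

        void BulletPhysics::BindBone( BoneBinding& binding, ActorId aid )
        {
            binding.body = FindActorBody( aid );   // map lookup happens once
        }

        // Per-tick path: no map lookup, just the conversion and the set.
        void BulletPhysics::VKinematicMove( const Mat& mat, BoneBinding& binding )
        {
            if ( binding.body )
                binding.body->setWorldTransform( Mat_to_btTransform( mat ) );
        }

    The cached pointer must be invalidated if the rigid body is ever destroyed, so this trades a lookup for a lifetime-management obligation.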

    Read the article

  • Webcast: Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory

    - by Etienne Remillon
    Interested in upgrading from DSEE to OUD? Register to learn from one customer. Oracle Unified Directory (OUD) is the world’s first unified directory solution with highly integrated storage, synchronization, and proxy capabilities. These capabilities help meet the evolving needs of enterprise architectures. OUD customers can lower the cost of administration and ownership by maintaining a single directory for all of their enterprise needs, while also simplifying their enterprise architecture. OUD is optimized for mobile and cloud computing environments, where elastic scalability becomes critical as service providers need a solution that can scale by dynamically adding more directory instances without re-architecting their solutions to support exponential business growth. Join us for this webcast and you will:

        • Learn from one customer that has successfully upgraded to the new platform
        • See what technology and business drivers influenced the upgrade
        • Hear about the benefits of OUD's elastic scalability and unparalleled performance
        • Get additional information and resources for planning an upgrade

    Register now for this complimentary webcast: Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory, Thursday, September 13, 2012, 10:00 a.m. PT / 1:00 p.m. ET.

    Read the article

  • Book Review: Professional WCF 4

    - by Sam Abraham
    My investigation of WCF internals has set the right stage to revisit Professional WCF 4 by Pablo Cibraro, Kurt Claeys, Fabio Cozzolino and Johann Grabner. In this book, the authors dive deep into all aspects of the WCF API in a reading targeted at intermediate and advanced developers. Book quality, as far as presentation, code completeness, content clarity and organization go, was superb. The authors have taken a hands-on approach to thoroughly covering the WCF 4.0 API, with three chapters totaling 100+ pages completely dedicated to business cases, with downloadable source code readily available. Chapter 1 outlines SOA best-practice considerations. The next three chapters take a top-down approach to the WCF API, covering service and data contracts, bindings, clients, instancing and Workflow Services, followed by three more carefully thought-out chapters covering the security options available via the WCF API. In conclusion, Professional WCF 4 provides thorough coverage of the WCF API and is a recommended read for anybody looking to reinforce their understanding of the various features available in the WCF framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers’ Group.   All the best, --Sam

    Read the article

  • Java EE 7 JSR Submitted

    - by Tori Wieldt
    Java EE 7 has been filed as JSR 342 in the JCP program. This JSR (Java Specification Request) will develop Java EE 7, the next version of the Java Platform, Enterprise Edition. It is an "umbrella JSR" because the specification includes a collection of several other JSRs. The proposal suggests the addition of two new JSRs: Concurrency Utilities for Java EE (JSR-236) and JCache (JSR-107), as well as updates to JPA, JAX-RS, JSF, Servlets, EJB, JSP, EL, JMS, JAX-WS, CDI, Bean Validation, JSR-330, JSR-250, and Java Connector Architecture. There are also two new APIs under discussion: a Java Web Sockets API and a Java JSON API. These are the new JSRs that are currently up for ballot:

        • JSR 342: Java Platform, Enterprise Edition 7 Specification
        • JSR 340: Java Servlet 3.1 Specification
        • JSR 341: Expression Language 3.0
        • JSR 343: Java Message Service 2.0
        • JSR 344: JavaServer Faces 2.2

    All 5 JSRs are now up for Executive Committee voting, with ballots closing on 14 March, and are slated for inclusion in Java EE 7. All of these JSRs are also open for Expert Group nominations. Any JCP member can nominate themselves to serve on the Expert Groups for these JSRs. Details on how to become a JCP member are on jcp.org. The JCP gives you a chance to have your own work become an official component of the Java platform and to offer suggestions for improving and growing the technology. Either way, everyone in the Java community benefits from your participation. There's a nice discussion about Java EE 7 in this podcast with Java EE spec lead Roberto Chinnici, and more information in this blog post on the Aquarium. It's exciting to see so much activity currently underway.

    Read the article

  • Big Data Appliance X4-2 Release Announcement

    - by Jean-Pierre Dijcks
    Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.

    Software Focus

    The focus for this 3rd generation of Big Data Appliance is:

        • Comprehensive and open - Big Data Appliance now includes all Cloudera software, including Back-up and Disaster Recovery (BDR), Search, Impala and Navigator, as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE)
        • Lower TCO than DIY Hadoop systems
        • Simplified operations while providing an open platform for the organization
        • Comprehensive security, including the new Audit Vault and Database Firewall software, Apache Sentry and Kerberos configured out-of-the-box

    Hardware Update

    A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, the following is a comparison between the old (X3-2) and the new (X4-2) hardware:

                     Big Data Appliance X3-2                     Big Data Appliance X4-2
        CPU          2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz)   2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)
        Memory       64GB                                        64GB
        Disk         12 x 3TB High Capacity SAS                  12 x 4TB High Capacity SAS
        InfiniBand   40Gb/sec                                    40Gb/sec
        Ethernet     10Gb/sec                                    10Gb/sec

    For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity than the previous generation, while adding faster CPUs. Memory for BDA is expandable to 512 GB per node and can be done on a per-node basis, for example for NameNodes, for HBase region servers, or for NoSQL Database nodes.

    Software Details

    More details in terms of software and the current versions (note that BDA follows a three-month update cycle for Cloudera and other software):

                            Big Data Appliance 2.2 Software Stack   Big Data Appliance 2.3 Software Stack
        Linux               Oracle Linux 5.8 with UEK 1             Oracle Linux 6.4 with UEK 2
        JDK                 JDK 6                                   JDK 7
        Cloudera CDH        CDH 4.3                                 CDH 4.4
        Cloudera Manager    CM 4.6                                  CM 4.7

    And like we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available for all BDA customers. For more information:

        • Big Data Appliance Data Sheet
        • Big Data Connectors Data Sheet
        • Oracle NoSQL Database Data Sheet (CE | EE)
        • Oracle Advanced Analytics Data Sheet

    Read the article

  • SAP Applications Certified for Oracle SPARC SuperCluster

    - by Javier Puerta
    SAP applications are now certified for use with the Oracle SPARC SuperCluster T4-4, a general-purpose engineered system designed for maximum simplicity, efficiency, reliability, and performance. "The Oracle SPARC SuperCluster is an ideal platform for consolidating SAP applications and infrastructure," says Ganesh Ramamurthy, vice president of engineering, Oracle. "Because the SPARC SuperCluster is a pre-integrated engineered system, it enables data center managers to dramatically reduce their time to production for SAP applications to a fraction of what a build-it-yourself approach requires, and radically cuts operating and maintenance costs." SAP infrastructure and applications based on the SAP NetWeaver technology platform 6.4 and above and certified with Oracle Database 11g Release 2, such as the SAP ERP application and SAP NetWeaver Business Warehouse, can now be deployed using the SPARC SuperCluster T4-4. The SPARC SuperCluster T4-4 provides an optimized platform for SAP environments that can reduce configuration times by up to 75 percent, reduce operating costs by up to 50 percent, improve query performance by up to 10x, and improve daily data loading by up to 4x. The Oracle SPARC SuperCluster T4-4 is the world's fastest general-purpose engineered system, delivering high performance, availability, scalability, and security to support and consolidate multi-tier enterprise applications with Web, database, and application components. The SPARC SuperCluster T4-4 combines Oracle's SPARC T4-4 servers running Oracle Solaris 11 with the database optimization of Oracle Exadata, the accelerated processing of Oracle Exalogic Elastic Cloud software, and the high throughput and availability of Oracle's Sun ZFS Storage Appliance, all on a high-speed InfiniBand backplane. Part of Oracle's engineered systems family, the SPARC SuperCluster T4-4 demonstrates Oracle's unique ability to innovate and optimize at every layer of technology to simplify data center operations, drive down costs, and accelerate business innovation. For more details, refer to:

        • Our press release
        • Datasheet: Oracle's SPARC SuperCluster T4-4 (PDF)
        • Datasheet: Oracle's SPARC SuperCluster Now Supported by SAP (PDF)
        • Video Podcast: Oracle's SPARC SuperCluster (MP4)

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes provided a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections.

    Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC "Big Data" challenges to be faced, and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence, covering hidden signs and patterns of criminal activity; the early or retrospective detection of violations of regulations, laws or corporate policies and procedures; emerging risks; weakening controls; and so on? Not exactly the stuff of James Bond, to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges.

    So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:

        • Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
        • Machine-Generated/Sensor Data – includes Call Detail Records ("CDR"), weblogs and trading systems data.
        • Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:

        • Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. It includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more.
        • Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.
        • Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns executed, new data types are needed to capture the resultant information.
        • Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action.

    For example, customer service calls and emails have millions of useful data points and have long been a source of information for GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.

    Big Challenges - Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category.

    However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, given the sheer amount of information organizations deal with, the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint. However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward.

    Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

        • Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
        • Conduct a big data readiness and Information Architecture maturity assessment
        • Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
        • Provide initial guidance on big data candidate selection for migrations or implementation
        • Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
        • Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
        • Conduct an executive workshop with recommendations and next steps

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions. Big data is here to stay and risk management certainly is not going anywhere; ultimately, financial services organizations that embrace big data's potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • How can I convert the Nvidia driver installer into a deb?

    - by Oli
    Every so often there's a beta version of the Nvidia driver that I want to try out. This has happened today: there's been a big performance issue with version 295.40 and I want to try the shiny new XRandR-enabled 302.07. I'm more than able to download the installer, remove all the repo-installed driver files and install the new version, but it's frankly a pain in the bottom to turn that around and go back to the repo version. It also means I have to re-install the driver manually each time there's a kernel upgrade. The other option we commonly give people is a PPA, but in this case I'm being really impatient. It's going to be a few days before any PPA gets this, but I need to try this today. I've already manually installed it on the media centre and I'm eyeing up my desktop now. So how do I take an installer (e.g. NVIDIA-Linux-x86-302.07.run) and convert it into a new nvidia-current/nvidia-current-updates package? Another way of asking this might be: how do people package the Nvidia drivers?
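
    (A starting point, not a full answer: the NVIDIA .run installers have historically been able to unpack themselves without installing, which is the usual first step when repackaging; the exact option name is an assumption here, so verify against the installer's --help output.)

        # Unpack the installer without installing anything (verify the option with --help)
        sh NVIDIA-Linux-x86-302.07.run --extract-only
        # This should leave an NVIDIA-Linux-x86-302.07/ directory containing
        # the kernel module source and user-space libraries to be packaged.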

    Read the article

  • Failed to spawn test

    - by Lost
    Running a simple test in Ubuntu 12.04:

        sudo lxc-execute -n test /bin/bash -l debug -o outout

    Got error message:

        lxc-execute: failed to spawn 'test'

    cat outout:

        lxc-execute 1347053658.113 DEBUG lxc_start - sigchild handler set
        lxc-execute 1347053658.113 INFO lxc_start - 'test' is initialized
        lxc-execute 1347053658.366 DEBUG lxc_start - Dropping cap_sys_boot and watching utmp
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/' (rootfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/sys' (sysfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/proc' (proc)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev' (devtmpfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev/pts' (devpts)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/' (ext3)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/fs/fuse/connections' (fusectl)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/debug' (debugfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/security' (securityfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/lock' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/shm' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/rpc_pipefs' (rpc_pipefs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/scratch/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/share' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/proj/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/users/bhu' (nfs)
        lxc-execute 1347053658.367 ERROR lxc_start - failed to spawn 'test'

    Run command: sudo lxc-checkconfig

        Kernel config /proc/config.gz not found, looking in other places...
        Found kernel config file /boot/config-2.6.38.7-1.0emulab
        --- Namespaces ---
        Namespaces: enabled
        Utsname namespace: enabled
        Ipc namespace: enabled
        Pid namespace: enabled
        User namespace: enabled
        Network namespace: enabled
        Multiple /dev/pts instances: enabled
        --- Control groups ---
        Cgroup: enabled
        Cgroup namespace: enabled
        Cgroup device: enabled
        Cgroup sched: enabled
        Cgroup cpu account: enabled
        Cgroup memory controller: enabled
        Cgroup cpuset: enabled
        --- Misc ---
        Veth pair device: enabled
        Macvlan: enabled
        Vlan: enabled
        File capabilities: enabled
        Note : Before booting a new kernel, you can check its configuration
        usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

    What's the problem? Thanks a lot
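
    (An observation on the log, not part of the original question: the lxc_cgroup DEBUG lines show LXC walking the mount table looking for a mounted cgroup hierarchy and never finding one before the ERROR. On LXC setups of this era, and especially on custom kernels, the cgroup filesystem sometimes had to be mounted by hand, roughly as below; treat this as an assumption to verify for your distribution.)

        # Sketch: give LXC a cgroup hierarchy to find (path and name are
        # conventional choices, not taken from the post)
        sudo mkdir -p /cgroup
        sudo mount -t cgroup cgroup /cgroup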

    Read the article

  • HDMI video output not working for external monitor

    - by user291852
    I have installed from scratch Ubuntu GNOME 14.04 with gnome-flashback on my old HP HDX 16 laptop (Core 2 Duo P8600 + Nvidia GT9600M + 4GB of RAM) and I have problems with the HDMI output (I use it to extend my desktop onto a Dell U2412M 1920x1200 monitor). In the following I summarize the configurations that I have tried:

        • Using the open source nouveau drivers, only the laptop monitor works; there is no signal from the external monitor connected to the HDMI output. However, the output of the xrandr command shows that the HDMI output is connected with the correct resolution, 1920x1200 (I find this really weird).
        • Nouveau drivers with a VGA connection work without problems on the external monitor, but the image is blurry compared to the HDMI connection.
        • Using the nvidia drivers (I have tried different versions: nvidia-331-updates and the xorg-edgers versions nvidia-334 and nvidia-337), the HDMI output works, but I have system instabilities, random crashes and display freezes. I can't even enter terminal mode with Ctrl-Alt-F1, so I have to manually shut down the laptop with the power button.

    I really would like to use the HDMI output with the nouveau drivers to avoid the system instabilities that I experienced with the nvidia drivers, but I can't figure out how to make it work. Alessandro

    Read the article

  • Can't Mount Phone, "according to mtab, /dev/sdb1 is already mounted on /"

    - by RPG Master
    My myTouch Slide wasn't mounting, so I decided to open Disk Utility. My phone shows up, but when I click "Mount" it gives me this error:

        Error mounting: mount exited with exit code 1: helper failed with:
        mount: according to mtab, /dev/sdb1 is already mounted on /
        mount failed

    Here's my mtab:

        /dev/sdb1 / ext4 rw,errors=remount-ro,commit=0 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        none /sys sysfs rw,noexec,nosuid,nodev 0 0
        fusectl /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        none /dev devtmpfs rw,mode=0755 0 0
        none /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        none /dev/shm tmpfs rw,nosuid,nodev 0 0
        none /var/run tmpfs rw,nosuid,mode=0755 0 0
        none /var/lock tmpfs rw,noexec,nosuid,nodev 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matthew/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matthew 0 0
        /dev/sdg1 /media/Seagate\040GoFlex ext4 rw,nosuid,nodev,uhelper=udisks 0 0

    EDIT: Here's my fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        /dev/sda1 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=3b0db205-2bdb-4c98-a506-6bdd3520d540 none swap sw 0 0

    Read the article

  • Domain forwarding to a IE "trusted site" opens a blank page

    - by Michael Jasper
    My employer, a University, regularly hosts conferences and other events. While websites for these events are hosted on our domain, organizers frequently request customized .com URLs. We then forward these domains to the specific site. Recently, we discovered a problem where a page will not load if the following conditions are met (using a current example):

        • a website is created on our CMS for a conference: http://continue.weber.edu/nulc
        • a domain is created, http://www.nulc2012.com, and forwarded to http://continue.weber.edu/nulc
        • the user enters http://www.nulc2012.com into their address bar using IE7 or IE8
        • the user has *.weber.edu listed as a "trusted site" in IE security settings (the case for nearly all on-campus computers)

    When this happens, the browser will correctly transfer to the page http://continue.weber.edu/nulc/index.php, however the page is blank, returning only:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <HTML><HEAD>
        <META content="text/html; charset=windows-1252" http-equiv=Content-Type></HEAD>
        <BODY></BODY></HTML>

    Is there any known solution to this problem? Or am I missing something completely? Note: tested websites do load correctly in Chrome, Firefox, and Safari.

    Read the article

  • Dependency errors on installing Banshee

    - by Ben Cracknell
    I just installed Ubuntu 12.10 (verified the ISO hash as well). The VERY first thing I did was open the Software Centre and try to install Banshee. I am met with the following error:

        The following packages have unmet dependencies:
        banshee:
          Depends: libc6 (>= 2.7) but 2.15-0ubuntu20 is to be installed
          Depends: libglib2.0-0 (>= 2.34.1) but 2.34.0-1ubuntu1 is to be installed
          Depends: libgtk2.0-0 (>= 2.24.0) but 2.24.13-0ubuntu2 is to be installed
          Depends: libsoup-gnome2.4-1 (>= 2.27.4) but 2.40.0-0ubuntu1 is to be installed
          Depends: libsoup2.4-1 (>= 2.26.1) but 2.40.0-0ubuntu1 is to be installed
          Depends: libx11-6 (>= 2:1.4.99.1) but 2:1.5.0-1 is to be installed
          Depends: mono-runtime (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
          Depends: libc0.1 (>= 2.15) but it is not going to be installed
          Depends: libgconf2.0-cil (>= 2.24.0) but 2.24.2-2 is to be installed
          Depends: libgdk-pixbuf2.0-0 (>= 2.26.4) but 2.26.4-0ubuntu1 is to be installed
          Depends: libglib2.0-cil (>= 2.12.10-1ubuntu1) but 2.12.10-4 is to be installed
          Depends: libgtk2.0-cil (>= 2.12.10-1ubuntu1) but 2.12.10-4 is to be installed
          Depends: libmono-cairo4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
          Depends: libmono-corlib4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
          Depends: libmono-posix4.0-cil (>= 2.10.1) but 2.10.8.1-5ubuntu1 is to be installed
          Depends: libmono-system-core4.0-cil (>= 2.10.3) but 2.10.8.1-5ubuntu1 is to be installed
          Depends: libmono-system4.0-cil (>= 2.10.7) but 2.10.8.1-5ubuntu1 is to be installed
          Depends: gnome-icon-theme (>= 2.16) but 3.6.0-0ubuntu2 is to be installed

    I should note that the banshee application appears three times when searching for it: http://i.imgur.com/fJOsb.png Other applications install fine, though. I installed the latest updates and still received the same error. I even tried reinstalling Ubuntu, but the same thing happened.

    Read the article

  • Turning laptop into WAP using netgear WNA1100? (stuck at hostapd)

    - by Vivek Sharma
    I have a Netgear WNA1100 USB wifi adapter. I have installed the Atheros driver from the forum (the name of the file is ath9k_htc-installer.1.0.1-maverick-fixed.deb). I wish to make a setup like Connectify (Windows) on Ubuntu, so that I can connect my phone wirelessly to my laptop via the Netgear WNA1100 (behaving as an AP) and eventually use the internet via my wired LAN. I have installed the above-mentioned driver, hostapd and hostap-utils. Following is my hostapd.conf file:

        ssid=vks
        interface=wlan1 # The interface name of the card
        driver=ath9k_htc # The card driver
        macaddr_acl=0
        accept_mac_file=/etc/hostapd.accept
        deny_mac_file=/etc/hostapd.deny
        ieee80211x=1 # Use 802.1X authentication
        auth_algs=1
        ignore_broadcast_ssid=0
        wpa=2
        wpa_passphrase=88888888
        wpa_key_mgmt=WPA-PSK
        wpa_pairwise=TKIP
        rsn_pairwise=CCMP

    When I run

        sudo hostapd /etc/hostapd/hostapd.conf

    I get an error:

        invalid/unknown driver 'ath9k_htc # The card driver'

    I think the driver is installed fine, as I can see the blue LED blinking on the Netgear adapter, which was not blinking earlier. Can someone please guide me on how to achieve this setup? I would appreciate an example hostapd.conf file with a simple wpa_psk security setup. Please be detailed and descriptive with commands, including how to run and end it. Following is the output from lsmod; I have only pasted the ath and ath-related entries. Which driver shall I use?

        Module          Size    Used by
        ath9k_htc       42903   0
        ath9k_common    2563    1 ath9k_htc
        ath9k_hw        285176  2 ath9k_htc,ath9k_common
        ath             13001   2 ath9k_htc,ath9k_hw
        cfg80211        139811  3 ath9k_htc,mac80211,ath
        compat          4020    1 cfg80211
        led_class       2633    3 ath9k_htc,thinkpad_acpi,sdhci

    Thanks.
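
    (Two observations, not from the original post: first, the error message quotes the whole string 'ath9k_htc # The card driver', which suggests hostapd is reading the trailing comment as part of the value, so comments should sit on their own lines. Second, hostapd's driver= field names a hostapd driver backend rather than a kernel module; for mac80211-based adapters such as the ath9k_htc device, that backend is normally nl80211. A minimal WPA2-PSK config along those lines, offered as an unverified sketch:)

        # Minimal WPA2-PSK AP sketch; assumes the nl80211 backend works with ath9k_htc
        interface=wlan1
        driver=nl80211
        ssid=vks
        hw_mode=g
        channel=6
        auth_algs=1
        wpa=2
        wpa_passphrase=88888888
        wpa_key_mgmt=WPA-PSK
        rsn_pairwise=CCMP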

    Read the article

  • The Oracle MDM Portfolio & Strategy Session - It All Comes Down to Master Data

    - by Mala Narasimharajan
    By Narayana Machiraju. We are now less than a week from the start of Oracle Open World 2012, and I would like to introduce you all to one of the most awaited MDM strategy sessions this year, titled "What's there to Know about Oracle's Master Data Management Portfolio and Roadmap?". Manouj Tahiliani, Senior Director of MDM Product Strategy, provides a complete picture of the Oracle MDM portfolio, the product releases, the strategy and the roadmaps. Manouj will be discussing Oracle Fusion MDM applications, the first enterprise-grade SaaS MDM product suite. You'll hear strategies for leveraging MDM and data quality in the enterprise and how you can derive business value by deploying an MDM foundation for strategic initiatives such as customer experience management, product innovation, and financial transformation. And as a bonus, he is also going to discuss the confluence of MDM with emerging technologies such as big data, social, and mobile. The session is co-presented by GEHC and Westpac. Tony Craddock from Westpac is going to share the insights of their MDM implementation in terms of business drivers, data governance, ROI and other important implementation considerations. A representative from GEHC is going to talk about their MDM journey and the multi-domain MDM story. I strongly recommend you not miss this important session. The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and roadmap, we have several customer panels, conference sessions and customer round table sessions featuring a lot of marquee customers. You can see an overview of MDM sessions here.

    Read the article

  • CodePlex Daily Summary for Wednesday, May 30, 2012

    Popular Releases

        • OMS.Ice - T4 Text Template Generator: OMS.Ice - T4 Text Template Generator v1.4.0.14110: Issue 601 - Template file name cannot contain characters that are not allowed in C#/VB identifiers; Issue 625 - Last line will be ignored by the parser; Issue 626 - Usage of environment variables and macros
        • Silverlight Toolkit: Silverlight 5 Toolkit Source - May 2012: Source code for December 2011 Silverlight 5 Toolkit release.
        • totalem: version 2012.05.30.1: Beta version; added function for mass renaming files and folders
        • Audio Pitch & Shift: Audio Pitch and Shift 4.4.0: Tracklist added on main window; improved performance with tracklist; some other fixes
        • Json.NET: Json.NET 4.5 Release 6: New feature - Added IgnoreDataMemberAttribute support; New feature - Added GetResolvedPropertyName to DefaultContractResolver; New feature - Added CheckAdditionalContent to JsonSerializer; Change - Metro build now always uses late bound reflection; Change - JsonTextReader no longer returns no content after consecutive underlying content read failures; Fix - Fixed bad JSON in an array with error handling creating an infinite loop; Fix - Fixed deserializing objects with a non-default cons...
        • DBScripterCmd - A command line tool to script database objects to separate files: DBScripterCmd Source v1.0.2.zip: Add support for SQL Server 2005
        • Indent Guides for Visual Studio: Indent Guides v12.1: Version history. Changed in v12.1: fixed crash when unable to start asynchronous analysis; fixed upgrade from v11. Changed in v12: background document analysis; new options dialog with Quick Set selections for behavior; new "glow" style for guides; new menu icon in VS 11 preview; control now uses editor theming; highlighting can be customised on each line; fixed issues with collapsed code blocks; improved behaviour around left-aligned pragma/preprocessor commands (C#/C++); new setting...
        • DotNetNuke® Community Edition CMS: 06.02.00: Major highlights: fixed issue in the Site Settings when single quotes were being treated as escape characters; fixed issue loading the Mobile Premium Data after upgrading from CE to PE; fixed errors logged when updating folder provider settings; fixed the order of the mobile device capabilities in the Site Redirection Management UI. The User Profile page was completely rebuilt. We needed User Profiles to have multiple child pages. This would allow for the most flexibility by still f...
        • StarTrinity Face Recognition Library: Version 1.2: Much better accuracy
        • ????: ????2.0.1: 1、?????。
        • WiX Toolset: WiX v3.6 RC: WiX v3.6 RC (3.6.2928.0) provides feature-complete Burn with VS11 support. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/5/28/WiX-v3.6-Release-Candidate-available
        • Javascript .NET: Javascript .NET v0.7: SetParameter() reverts to its old behaviour of allowing JavaScript code to add new properties to wrapped C# objects. The behavior added briefly in 0.6 (throws an exception) can be had via the new SetParameterOptions.RejectUnknownProperties. TerminateExecution now uses its isolate to terminate the correct context automatically. Added support for converting all C# integral types, decimal and enums to JavaScript numbers. (Previously only the common types were handled properly.) Bug fixe...
        • callisto: callisto 2.0.29: Added DNS functionality to scripting. See documentation section for details of how to incorporate this into your scripts.
        • Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (May 2012): Fixes: unserialize() of negative float numbers fix; pcre possessive quantifiers and character class containing ()[]; array deserialization when the array contains a reference to ISerializable; parsing lambda function fix; round() reimplemented as it is in PHP to avoid .NET rounding errors; filesize bypass for FileInfo.Length bug in Mono. New features: time zones reimplemented, uses Windows/Linux database
        • SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.1): New features: admin enable/disable match; hide/show Euro 2012 SharePoint lists (3 lists). Installing SharePoint Euro 2012 Predictor: SharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365). Download the solution havivi.euro2012.wsp from the download page: Downloads. Upload this solution to your Site Collection via the solutions area. Click on Activate to make the web parts in the solution available for use in the Site C...
        • ????SDK for .Net 4.0+(OAuth2.0+??V2?API): ??V2?SDK???: ????SDK for .Net 4.X???????PHP?SDK???OAuth??API???Client???。 ??????API?? ???????OAuth2.0???? ???:????????,DEMO??AppKey????????????????,?????AppKey,????AppKey???????????,?????“????>????>????>??????”
        • .Net Code Samples: Code Samples: Code samples (SLNs).
        • LINQ_Koans: LinqKoans v.02: Cleaned up a bit
        • CommonLibrary.NET: CommonLibrary.NET 0.9.8 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Fluentscript: CommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Application: FluentScript; Version: 0.9.8; Build: 0.9.8.4; Changeset: 75050 (CommonLibrary.NET); Release date: May 24, 2012; Binaries: CommonLibrary.dll; Namespace: ComLib.Lang; Project site: http://fluentscript.codeplex.com...
        • JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0 RC1 Refresh 2: JayData is a unified data access library for JavaScript developers to query and update data from different sources like webSQL, indexedDB, OData, Facebook or YQL. See it in action in this 6-minute video: http://www.youtube.com/watch?v=LlJHgj1y0CU. RC1 R2 release highlights: Knockout.js integration. Using the Knockout.js module, your UI can be automatically refreshed when the data model changes, so you can develop the front-end of your data manager app even faster. Querying 1:N relations in W...

    New Projects

        • 5Widgets: 5Widgets is a framework for building HTML5 canvas interfaces. Written in JavaScript, 5Widgets consists of a library of widgets and a controller that implements the MVC pattern. Though the HTML5 standard is gaining popularity, there is no framework like this at the moment. Yet, as a professional developer, I know many, including myself, would really find such a library useful. I have uploaded my initial code, which can definitely be improved since I have not had the time to work on it fu...
        • Azure Trace Listener: Simple trace listener outputting trace data directly to Windows Azure Queue or Table Storage. Unlike the Windows Azure Diagnostics Listener (WAD), logging happens immediately and does not rely on a (scheduled or manually triggered) log transfer mechanism. A simple Reader application shows how to read trace entries and can be used as-is or as a base for more advanced scenarios.
        • CodeSample2012: Code Sample is a Windows tool for saving pieces of code
        • Encryption: The goal of the Encryption project is to provide solid, high-quality functionality that aims at taking the complexity out of using the System.Security.Cryptography namespace. The first pass of this library provides a very strong password encryption system. It uses variable-length random salt bytes with secure SHA512 cryptographic hashing functions to allow you to provide a high level of security to the users.
        • Entity Framework Code-First Automatic Database Migration: The Entity Framework Code-First Automatic Database Migration tool was designed to help developers easily update their database schema while preserving their data when they change their POCO objects. This is not meant to take the place of Code-First Migrations. This project is simply designed to ease the development burden of database changes. It grew out of the desire to not have to delete, recreate, and seed the database every time I made an object model change.
        • Function Point Plugin: Function Point Traceability Mapper VSIX for Visual Studio 2010/TFS 2010+
        • FunkOS: dcpu16 operating system
        • Git for WebMatrix: This is a WebMatrix extension that allows users to access Git source control functions.
        • Groupon Houses: the groupon site for houses
        • Liquifier - Complete serialisation/deserialisation for complex object graphs: Liquifier is a serialisation/deserialisation library for preserving object graphs with references intact. Liquifier uses attributes and interfaces to allow the user to control how a type is serialised - the aim is to free the user from having to write code to serialise and deserialise objects, especially large or complex graphs in which this is a significant undertaking.
        • MTACompCommEx032012: lak lak lak
        • MVC Essentials: MVC Essentials is aimed at collecting all my learning from the projects that I have worked on.
        • MyWireProject: This project manages wireless networks.
        • Peulot Heshbon - Hebrew: This program is for teaching young students math, up to 6th grade. The program gives questions to the user. The user needs to answer the question. After 10 questions the user gets his mark. The marks are saved and can be viewed from every run.
        • PlusOne: A .NET Extension and Utility Library: PlusOne is a library of extension and utility methods for C#.
        • Project Support: This project is a simple project management solution. It will allow you to manage your clients, track bug reports, request additional features for projects that you are currently working on, and much more. A client will be allowed to have multiple users, so that you can track who has made reports etc. and provide them feedback. The solution is set up so that if you require, you can modify the styling to fit your company's needs with ease; you can even have multiple styles that can be set ...
        • SharePoint 2010 Slide Menu Control: Navigation control for building a SharePoint slide menu
        • SIGO: The SiGO Project (Sistema de Gerenciamento Odontologico, a dental management system) has a complex scope with several programs and routines, comprising the SAC and CRM, Finance, and Inventory modules, among other items. Coordinator Heitor F Neto: the SiGO project developed here on CodePlex is open source and will only be a prototype; once the basic modules are developed, it will be migrated to a paid server with security for the source code and database. Pessoa...
        • STEM123: Windows Phone 7 application to help people find and create STEM topic details.
        • TIL: Text Injection and Templating Library: An advanced alternative to string.Format() that encapsulates templates in objects, uses named tokens, and lets you define extra serializing/processing steps before injecting input into the template (think: join an array, serialize an object, etc.).
        • UAH Exchange: Ukrainian hryvnia currency exchange
        • uberBook: uberBook is a contact manager for the system tray. The program synchronizes all contacts with the Internet and automatically searches for your contacts' social network profiles.
        • WPF Animated GIF: A simple library to display animated GIF images in WPF, usable in XAML or in code.

    Read the article

  • OSB and Ubuntu 10.04 - Too Many Open Files

    - by jeff.x.davies
    When installing the latest Oracle Service Bus (11gR1PS3) onto my Ubuntu 10.04 system, the Eclipse IDE was complaining about there being too many open files. The Oracle Service Bus and the Oracle Enterprise Pack for Eclipse (aka OEPE) do make use of a lot of files. By default, Ubuntu will restrict each user to 1024 open files. A much more realistic number for OSB development is 4096. Changing the file limit in Ubuntu is fairly simple (if arcane). You will need to modify two different files and then restart your machine. First, you need to modify the limits.conf file as the root user. Open a terminal window and enter the following command:

        sudo gedit /etc/security/limits.conf

    Add the following 2 lines to the file. The asterisk simply means that the rule will apply to all users.

        * soft nofile 4096
        * hard nofile 4096

    Save your changes and close gedit. The second file to change is the common-session file. Use the following command:

        sudo gedit /etc/pam.d/common-session

    Add the following line:

        session required pam_limits.so

    Save the file and exit gedit. Restart your machine. You shouldn't have any more problems with too many open files.
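
    (One small addition to the original post: after the restart you can confirm that the new limit actually took effect by asking the shell for its open-file limit, which should now report 4096.)

        ulimit -n
        # expected output: 4096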

    Read the article

  • Where is a good place to start to learn about custom caching in .Net

    - by John
    I'm looking to make some performance enhancements to our site, but I'm not sure exactly where to begin. We have some custom object caching, but I think that we can do better.

    Our business: We aggregate news stories on a news type of web site. We get approximately 500-1000 new stories per week. We have index pages that show various lists of the items, and details pages that show the individual stories.

    Our current use case, getting an individual story:

        1. User makes a request.
        2. The Data Access Layer (DAL) checks to see if the item is in cache and if the item is fresh (15 minutes).
        3. If the item is not in cache or is not fresh, retrieve the item from SQL Server, save it to cache and return it to the user.

    Problems with this approach: The pull nature of caching means that users have to pay the waiting cost every time the cache is refreshed. Once a story is published, it changes infrequently, and I think that we should replace the pull model with something better.

    My initial thoughts: My initial thought is that stories should ALL be stored locally in some type of dictionary (cache, or is there another, better way?). If the story is not found, then make a trip to the database, update the local dictionary and send the item back. Since there may be occasional updates to stories, this should be entirely transparent to the user. I watched a video by Brent Ozar, How StackOverflow Scales SQL Server, in which Brent states "the fastest database query is the one that you don't make".

    Where do I start? At this point, I don't know exactly what the solution is. Is it caching? Is there a better way of using local storage? Do I use a Dictionary, OrderedDictionary, List? It seems daunting and I'm just looking for some good starting points to learn more about how to do this type of optimization.
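
    (A minimal sketch of the "local dictionary with fallback to SQL" idea described above, not a recommendation of any specific library. It uses ConcurrentDictionary from .NET 4; Story and the database-loading method are hypothetical placeholders, not names from the post.)

        using System;
        using System.Collections.Concurrent;

        public class Story
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }

        public class StoryCache
        {
            private readonly ConcurrentDictionary<int, Story> _stories =
                new ConcurrentDictionary<int, Story>();

            public Story GetStory(int id)
            {
                // Cache-aside: only miss once per story; later requests
                // never touch SQL Server.
                return _stories.GetOrAdd(id, LoadStoryFromDatabase);
            }

            public void UpdateStory(Story story)
            {
                // Push model: when an editor saves a change, overwrite the
                // cached copy instead of waiting for a timed expiry, so the
                // refresh is invisible to readers.
                _stories[story.Id] = story;
            }

            private Story LoadStoryFromDatabase(int id)
            {
                // Placeholder for the DAL call described in the post.
                throw new NotImplementedException();
            }
        }

    With 500-1000 new stories per week, holding every story in memory is plausible, but a plain dictionary never evicts; if memory pressure matters, something like System.Runtime.Caching.MemoryCache with expiration policies is the usual alternative.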

    Read the article

  • Bookmark Sentry Scans Your Chrome Bookmarks File For Bad Links and Dupes

    - by Jason Fitzpatrick
    Chrome: Bookmark Sentry, a free Chrome extension, takes the hard work out of checking your bookmark file for bad links and duplicates. Install it, forget about it, and get scheduled reports on the state of your bookmarks file. It’s that simple. Once you install the extension, open the options to toggle some basic settings to your liking (like the frequency of the scan, how long you want it to wait for a response, and whether you want it to look for bad links and/or duplicates). Once it finishes scanning, you’ll get a report indicating the status of the links (why they are marked as missing or duped) and the ability to selectively or mass delete them. The only caveat we’d share is that it will tell you links behind any sort of security are unavailable. If you bookmark pages that you use for work, behind your corporate firewall for example, and the scanner runs when you’re not authenticated, then it won’t be able to reach them. Other than that, it works like a charm. Bookmark Sentry is free, Google Chrome only. Bookmark Sentry [via Addictive Tips]

    Read the article

  • Should a programmer "think" for the client?

    - by P.Brian.Mackey
    I have gotten to the point where I hate requirements gathering. Customers are too vague for their own good. In an agile environment, where we can show the client a piece of work to completion, it's not too bad, as we can make small, regular corrections and updates to functionality. In a "waterfall" type of environment (requirements first, nearly complete product next), things can get ugly. This kind of environment has led me to constantly question requirements. E.g., the customer wants to "automatically convert input to the number 1" (referring to a Qty in an order). But what they don't think about is that the "input" could be a simple typo. An "x" in a textbox could be a "whoops", not "I want 1 of those toothpaste products". But there's so much in the air with requirements that I could stand there for hours on end hammering out what they want. This just isn't healthy. Working for a corporation, I could try to adjust the culture to fit the agile model that would help us (no small job, above my pay grade). Or I could sweep the ugly details under the rug and hope for the best. Maybe my customer is trying to get too close to the code? How does one handle the problem of "thinking for the client" without pissing them off with too many questions?

    Read the article

  • Remap keyboard Ubuntu 12.04; Asus Q500A

    - by hydroxide
    I have an Asus Q500A with Win8 and Ubuntu 12.04 64-bit; Linux kernel 3.8.0-32-generic. I am using gnome-panel and xserver-xorg-lts-raring. I have been experiencing problems with the keyboard shortcuts since I did a fresh install. Fn+F10 is supposed to mute my system, but instead it will repeatedly press "d". Fn+F11 is volume down, but it presses "c". Fn+F12 is volume up and presses "b" repeatedly. Most of the other on-board shortcuts, such as adjusting screen and LED brightness, work most of the time, but sometimes press other letters repeatedly. Also, sometimes my Ctrl gets held down for no reason. Everything works fine in Windows. I have tried installing all recommends and running

        sudo dpkg-reconfigure -a

    to reconfigure all packages, which did not solve my problem. I have tried using KeyTouch editor to edit keymaps, navigating to /usr/shar/x11/xkb/keymap; when I try opening any of these files it says the file contains no keyboard element. I think if I were just able to remap my keyboard it might solve my issues; otherwise, if anyone knows where I can get Asus drivers for 12.04, please let me know.

    Apparently I didn't have all repositories enabled. I executed the following commands and am trying the updates they give me. Getting linux kernel 3.8.0-33-generic as well as a bunch of other packages.

        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe"
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main universe restricted multiverse"
        sudo add-apt-repository "deb http://archive.canonical.com/ubuntu $(lsb_release -sc) partner"

    Read the article

  • How to resolve "dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb"?

    - by raz7588
    Update Manager will not update, although I have over 100 updates to do. I get an error message like this:

    installArchives() failed: Extracting templates from packages: 29% Extracting templates from packages: 58% Extracting templates from packages: 88% Extracting templates from packages: 100% Preconfiguring packages ... Extracting templates from packages: 29% Extracting templates from packages: 58% Extracting templates from packages: 88% Extracting templates from packages: 100% Preconfiguring packages ... Extracting templates from packages: 29% Extracting templates from packages: 58% Extracting templates from packages: 88% Extracting templates from packages: 100% Preconfiguring packages ... Extracting templates from packages: 29% Extracting templates from packages: 58% Extracting templates from packages: 88% Extracting templates from packages: 100% Preconfiguring packages ...

    (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 189751 files and directories currently installed.)

    Preparing to replace python-problem-report 2.0.1-0ubuntu7 (using .../python-problem-report_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1

    Preparing to replace python-apport 2.0.1-0ubuntu7 (using .../python-apport_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1

    Preparing to replace apport 2.0.1-0ubuntu7 (using .../apport_2.0.1-0ubuntu9_all.deb) ... apport stop/waiting Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already apport start/running Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1

    Preparing to replace gnome-orca 3.4.1-0ubuntu0.1 (using .../gnome-orca_3.4.2-0ubuntu0.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1

    Preparing to replace python-piston-mini-client 0.7.2-0ubuntu1 (using .../python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1

    Preparing to replace oneconf 0.2.8 (using .../oneconf_0.2.8.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/oneconf_0.2.8.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1

    Preparing to replace software-center 5.2.2 (using .../software-center_5.2.2.2_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/software-center_5.2.2.2_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1

    Preparing to replace libglade2-0 1:2.6.4-1ubuntu1 (using .../libglade2-0_1%3a2.6.4-1ubuntu1.1_amd64.deb) ... Unpacking replacement libglade2-0 ... Preparing to replace libv4l-0 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_amd64.deb) ... De-configuring libv4l-0:i386 ... Unpacking replacement libv4l-0 ... Preparing to replace libv4l-0:i386 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_i386.deb) ... Unpacking replacement libv4l-0:i386 ... Preparing to replace libv4lconvert0:i386 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_i386.deb) ... De-configuring libv4lconvert0 ... Unpacking replacement libv4lconvert0:i386 ... Preparing to replace libv4lconvert0 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_amd64.deb) ... Unpacking replacement libv4lconvert0 ...

    Errors were encountered while processing:
    /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb
    /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb
    /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb
    /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb
    /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb
    /var/cache/apt/archives/oneconf_0.2.8.1_all.deb
    /var/cache/apt/archives/software-center_5.2.2.2_all.deb
    Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)

    Setting up libglade2-0 (1:2.6.4-1ubuntu1.1) ... dpkg: error processing gnome-orca (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. dpkg: error processing python-problem-report (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4lconvert0 (0.8.6-1ubuntu2) ... Setting up libv4lconvert0:i386 (0.8.6-1ubuntu2) ... dpkg: error processing python-piston-mini-client (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4l-0 (0.8.6-1ubuntu2) ... Setting up libv4l-0:i386 (0.8.6-1ubuntu2) ...

    dpkg: dependency problems prevent configuration of python-apport: python-apport depends on python-problem-report (>= 0.94); however: Package python-problem-report is not configured yet. dpkg: error processing python-apport (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of software-center: software-center depends on python-piston-mini-client (>= 0.1+bzr29); however: Package python-piston-mini-client is not configured yet. dpkg: error processing software-center (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of oneconf: oneconf depends on python-piston-mini-client (>= 0.3+bzr32-0ubuntu1); however: Package python-piston-mini-client is not configured yet. dpkg: error processing oneconf (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of apport: apport depends on python-apport (>= 2.0.1-0ubuntu7); however: Package python-apport is not configured yet. dpkg: error processing apport (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin ... ldconfig deferred processing now taking place

    This has been going on for two weeks now and I cannot get any updates. Any help would be great.
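
    For what it's worth, every failure in the log is the same underlying error: /usr/bin/pyclean and /usr/bin/pycompile crash with "ValueError: bad marshal data (unknown type code)" while importing debpython, which usually points at a corrupted byte-compiled (.pyc) file rather than at the packages being upgraded. A minimal recovery sketch, assuming the damage really is limited to stale .pyc files (the paths are the usual ones on 12.04; adjust if your layout differs):

    # Delete byte-compiled files so Python regenerates them from source
    sudo find /usr/lib/python2.7 /usr/share/python -name '*.pyc' -delete

    # Let dpkg finish configuring the half-installed packages
    sudo dpkg --configure -a

    # Repair any remaining broken dependencies
    sudo apt-get install -f

    If dpkg still reports a package "in a very bad inconsistent state" afterwards, reinstalling that one package explicitly (for example sudo apt-get install --reinstall gnome-orca) is the usual next step.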

    Read the article

  • Announcing Oracle Environmental Accounting and Reporting

    - by Theresa Hickman
    Oracle just launched a new product called Oracle Environmental Accounting and Reporting, designed to help companies track and report their greenhouse gas emissions at the operational level. Companies around the world are facing increasing pressure to improve their energy efficiency and reduce waste in their operations. In addition, new worldwide greenhouse gas legislation is putting added pressure on companies to report the impact of their emissions and energy usage on the environment. Today, companies undergo extensive and expensive data audits to maintain a ledger of up-to-date emissions factors that compare figures on an annual basis. Existing "ad hoc" approaches using manual or niche solutions have a high operational cost and weak data security and auditability. The ideal solution is to embed environmental usage within mainstream business operations, such as recording energy usage at the time of invoice entry, and then report on those results. This is precisely what Oracle Environmental Accounting and Reporting is designed to do. You can now capture environmental data either electronically or manually; convert it to greenhouse gas emissions; comply with mandatory and voluntary greenhouse gas reporting schemes; and identify opportunities for CO2 emissions and cost reductions. Oracle recently acquired the intellectual property for this solution, which works with both Oracle E-Business Suite Release 12 and JD Edwards EnterpriseOne Release 9.0. For more information, visit Oracle Environmental Accounting and Reporting.

    Read the article

  • Bug fix for Eclipse runtime plugin

    - by Peter Benedikovic
    This blog post announces a bug fix that solves this issue. Before continuing, one important note – Linux and Mac users do not need to read further, because this bug appears only on Windows. The problem was that the runtime plugin registered a new runtime and server each time Eclipse started, so users ended up with a Servers view full of duplicate entries. I have created a new runtime plugin, which is now available at the update site http://download.java.net/glassfish/eclipse/indigo (or the same URL ending with juno for Juno users). You will still need to uninstall the buggy plugin and (optionally, but recommended) remove the runtimes it created. Here is the guide for installing the bug fix:

    1. Uninstall the buggy runtime plugin via Help->About Eclipse->Installation Details.
    2. Remove the runtimes created by the old plugin via Window->Preferences->Server->Runtime Environment. After pressing the Remove button you may be asked whether you also want to remove the servers based on the runtime being removed; doing so is recommended.
    3. Install the new runtime plugin via Help->Install New Software, pointing at the update site above.

    You may ask why I haven't provided the fix as an update to the buggy runtime, installable via Eclipse's Check for Updates feature. There are two main reasons: the bug fix is needed only for Windows users, so I didn't want to bother other users by updating a working plugin; and the runtime plugin had a structure that was not quite suitable for Eclipse updates. This structure has now changed, so future bugs (I am sure that there will be no such ;)) can be fixed by a standard update. Have a good one!
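
    P.S. If you script your Eclipse setup, the p2 director application can drive the same uninstall/install headlessly instead of the GUI steps above. This is only a sketch: the feature id below is a made-up placeholder, so replace it with the real id of the runtime feature from the update site (on Windows, run eclipsec.exe instead of eclipse).

    # Remove the buggy plugin (hypothetical feature id - use the real one)
    eclipse -nosplash -application org.eclipse.equinox.p2.director -uninstallIU com.example.glassfish.runtime.feature.group

    # Install the fixed plugin from the new update site
    eclipse -nosplash -application org.eclipse.equinox.p2.director -repository http://download.java.net/glassfish/eclipse/indigo -installIU com.example.glassfish.runtime.feature.group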

    Read the article

< Previous Page | 496 497 498 499 500 501 502 503 504 505 506 507  | Next Page >