Search Results

Search found 28559 results on 1143 pages for 'upgrade issue'.


  • Do ORMs enable the creation of rich domain models?

    - by Augusto
    After using Hibernate on most of my projects for about 8 years, I've landed at a company that discourages its use and wants applications to interact with the DB only through stored procedures. After doing this for a couple of weeks, I haven't been able to create a rich domain model of the application I'm starting to build, and the application just looks like a (horrible) transaction script. Some of the issues I've found are: I cannot navigate the object graph, as the stored procedures load only the minimum amount of data, which means that sometimes we have similar objects with different fields. One example: we have a stored procedure to retrieve all the data for a customer, and another to retrieve account information plus a few fields from the customer. Lots of the logic ends up in helper classes, so the code becomes more procedural (with entities used like old C structs). More boring scaffolding code, as there's no framework that extracts result sets from a stored procedure and puts them into an entity. My questions are: has anyone been in a similar situation and disagreed with the stored procedure approach? What did you do? Is there an actual benefit to using stored procedures, apart from the silly point of "no one can issue a drop table"? Is there a way to create a rich domain model using stored procedures? I know there's the possibility of using AOP to inject DAOs/repositories into entities to be able to navigate the object graph. I don't like this option, as it's very close to voodoo.

    Read the article

  • Connection to website interrupted/reset. Why?

    - by Goje87
    When navigating to some websites (like victorsosea.com, citibank.co.in, etc.) in Google Chrome, I get a message that the 'connection to www.site.com was interrupted'. If I try in Firefox, it says 'The connection to the server was reset'. This happens no matter what internet connection I use on my laptop, be it my home internet or office internet. I have tried using Google's DNS servers as well, but that does not seem to solve the problem. I have been facing this problem randomly for some websites. Kindly let me know how I can resolve the issue. My laptop configuration: Windows 7, 64-bit.
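    Since this happens across networks, a few checks from the laptop can help separate a name-resolution failure from a connection that is established and then reset. A hedged sketch from the Windows command prompt, using one of the hostnames mentioned above:

        rem does the name resolve with the DNS server currently configured?
        nslookup www.citibank.co.in
        rem does it resolve via Google DNS?
        nslookup www.citibank.co.in 8.8.8.8
        rem where along the path do packets stop?
        tracert www.citibank.co.in
        rem MTU check: fails if full-size 1500-byte frames can't cross the path
        ping -f -l 1472 www.citibank.co.in

    If the pings with the don't-fragment flag fail, an MTU mismatch (common on PPPoE links) is a plausible cause of mid-connection resets.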

    Read the article

  • USB port not recognising mouse on first bootup

    - by Pacifika
    On a computer here, the wired USB mouse is not recognised when the machine is first powered on: the light under the mouse is not lit. Other USB hubs and keyboards work fine. Disconnecting and reconnecting the mouse fixes the issue, even across a restart, but after the PC has been switched off for a length of time (for example, overnight) the problem reappears. I have swapped the mouse, updated the BIOS, installed updated IntelliMouse drivers, and turned off power saving on the USB ports. Any ideas?

    Read the article

  • Complete Public Folder Migration from Exchange 2007 to Exchange 2010

    - by Michael Todd
    We were in the process of migrating from Exchange 2007 to Exchange 2010 and hit a brick wall when trying to migrate public folders. After resolving issues with connectivity (and another issue with an old Exchange 2003 server listed in AD that was causing replication to fail), it finally appeared that messages were migrating from one server to the other. However, we appear to have jumped the gun and run MoveAllReplicas before the process was complete. We are now stuck with about 210MB of public folders on the new server from a 7GB public folder store on the old server. The messages appear to still be available on the old server, since running get-publicfolderstatistics shows that there are messages available. We have waited several days for the move to continue, but we are stuck at 210MB. Is there something we can do to complete the replication so that all of the messages move from the old server to the new one?

    Read the article

  • What settings need to be changed to allow EC2 instances to use Amazon's Route 53 for DNS?

    - by ks78
    I have a number of Amazon EC2 instances, all running Ubuntu, which I'd like to configure to use Amazon's Route 53. I set up a script, following Shlomo Swidler's article, but ran into script-related issues, which were answered here. Now I have the script working, but my instances are still not able to access Route 53's DNS. By this I mean, they are not able to resolve hostnames to IP addresses. My instances are currently configured with the DNS server IP address Amazon pushes out to them by default; does that need to be changed when using Route 53? I'm also IP-restricting my instances using the Security Groups. Could that be the problem? Is there a certain IP address or port I should open to allow communication with Route 53? It seems that DNS requests should be originating from my instances, so the Security Groups shouldn't be an issue, but I've been wrong before. If anyone has any ideas, I'd really appreciate it.
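    One thing worth noting: Route 53 serves authoritative zones, while the instances still resolve names through whatever resolver /etc/resolv.conf points at, and outbound DNS uses UDP/TCP port 53 (security groups filter inbound traffic, so outbound lookups are normally unaffected unless outbound rules were tightened). A minimal sketch to narrow it down; the Route 53 name server shown is a placeholder for one of the NS records assigned to the zone:

        cat /etc/resolv.conf                        # which resolver is the instance using?
        dig example.com                             # through the configured resolver
        dig example.com @8.8.8.8                    # through a known-good public resolver
        dig mydomain.example @ns-123.awsdns-45.com  # placeholder: one of the zone's NS hosts

    If the direct query against the zone's NS host works but the first two fail, the problem is the instance's resolver configuration rather than Route 53.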

    Read the article

  • Why CFOs Should Care About Big Data

    - by jmorourke
    The topic of “big data” clearly reached a tipping point in 2012. With plenty of coverage over the past few years in the IT press, we are now starting to see “big data” covered in the mainstream business press as well, including a cover story in the October 2012 issue of the Harvard Business Review. To help customers understand the challenges of managing “big data”, as well as the opportunities that can be created by leveraging it, Oracle has recently surveyed customers and published the results, along with white papers and articles on the topic. Most recently, we commissioned a white paper titled “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”. The premise is that “big data” is not just a topic CIOs should pay attention to, but one that CFOs should understand and take advantage of as well. Clearly, whoever masters the art and science of big data will be positioned for competitive advantage in their industry or market. That’s why smart CFOs are taking control of big data and business analytics projects, not just to uncover new ways to drive growth in a slowing global economy, but also to be a catalyst for change in the enterprise. With an increasing number of CFOs now responsible for overseeing IT investments and providing strategic insight to the board, CFOs will increasingly be called upon to take a leadership role in assessing the value of “big data” initiatives, building on their traditional skills in reporting and in helping managers analyze data to support decision making. Here’s a link to the white paper referenced above, which is posted on the Oracle C-Central/CFO web site, along with some other resources that can help CFOs master the topic of “big data”:

    - White paper: “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”
    - CFO Market Watch article: “Does Big Data Affect the CFO?”
    - Oracle survey report: “From Overload to Impact – An Industry Scorecard on Big Data Industry Challenges”
    - Upcoming Big Data webcast with Andrew McAfee

    Here’s a general link to Oracle C-Central/CFO in case you want to start there: www.oracle.com/c-central/cfo. Feel free to contact me if you have any questions or need additional information: [email protected]

    Read the article

  • hung up troubleshooting packet discards

    - by Chris Satola
    I realize my question is generic, but hopefully someone has some guidance for me. My network consists of Cisco switches. I am seeing a significant number of transmit drops (upwards of millions of packets per day) on a link between two switches, a 3750 and a 3560. Peak throughput on this link is only in the upper 400 Mbps range, so it shouldn't be a bandwidth issue. At this point I am somewhat clueless about where to look or what tools I can use to determine which packets are dropping and why. I can set up a SPAN port on that link and wireshark it, but I don't know if that would tell me anything. Does anyone have any suggestions? Thanks in advance.
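    For what it's worth, output drops on 3750/3560 links that look uncongested on average are very often microbursts exhausting the small per-port egress buffers, which five-minute averages hide (and enabling QoS shrinks the default queues further). Some hedged starting points on the switch CLI; the interface name is an assumption:

        show interfaces GigabitEthernet1/0/1
        ! look at "Total output drops" and txload on both ends of the link
        show interfaces GigabitEthernet1/0/1 counters errors
        show mls qos
        ! if QoS is enabled, the per-queue counters show which egress queue is starving:
        show mls qos interface GigabitEthernet1/0/1 statistics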

    Read the article

  • Google results show .info domain instead of .com

    - by user481913
    I am on shared hosting currently, and I registered this account with a .info domain as the main domain, say MyDomain.info. However, the site runs from MyDomain.com. This is a cPanel-based shared hosting account. MyDomain.info has nothing hosted at all, i.e. no content files; MyDomain.com is set up as an add-on domain and runs from /public_html/MyDomain under MyDomain.info. The problem is that when I type MyDomain as the keyword for a Google search, it shows result(s) for MyDomain.info, although this is not the intended site and has no content hosted on it. I tried to solve the issue by issuing a 301 permanent redirect from MyDomain.info to MyDomain.com, but Google keeps displaying MyDomain.info as the main site even a month after the redirect. I want Google to index MyDomain.com as the main site and remove MyDomain.info from the results. Also, is this harmful from an SEO point of view? How can I improve the SEO if it is?
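    Before anything else, it's worth confirming the redirect really returns a 301 rather than a 302 (a 302 tells Google to keep the old URL indexed). A quick check with curl, using the placeholder domain from the question:

        curl -I http://mydomain.info/
        # expect: HTTP/1.1 301 Moved Permanently
        #         Location: http://mydomain.com/

    Google Webmaster Tools also offers a URL removal request, which can speed up dropping the .info results once the 301 is confirmed.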

    Read the article

  • Fixing the Windows 7 BOOTMGR

    - by Ashfame
    I set up my Dell XPS 15z laptop to dual-boot with Ubuntu last year, and something went wrong and the Windows BOOTMGR got fried. I couldn't fix it at the time, and I kept using Ubuntu. I don't even remember whether I installed directly via a live USB or used Wubi, sorry. I installed 11.10 at that point; right now I am on 12.10. Today I learned about the Boot-Repair tool, so I was wondering whether with this tool I could figure out what exactly is wrong with my setup. This is my boot info: http://paste.ubuntu.com/1343575/. If I select the Win7 entry in GRUB 2, I get the error "BOOTMGR is missing. Press Ctrl-Alt-Del." Now, I have read numerous links on how this could be fixed, but I don't feel comfortable without knowing what I am doing. So unless I am sure what a certain tool would do, I would prefer fixing it by hand (manually editing files). Reading my boot info file, can anyone explain to me what exactly is messed up here and what could fix it? I certainly can't afford to have my Ubuntu install unbootable right now, but this issue is bothering me too much to leave alone. Help appreciated! I have a Win7 DVD and Ubuntu live USBs with me; I am just looking for a sure-shot way of fixing Win7 without any harm to my existing Ubuntu install.
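    For reference, the standard manual repair runs from the Windows 7 DVD's System Recovery Options command prompt. A hedged sketch; note that bootrec /FixMbr should be avoided here, since it would overwrite GRUB in the MBR and leave the Ubuntu install unbootable:

        rem rewrites the boot sector of the Windows partition (where BOOTMGR is loaded from)
        bootrec /FixBoot
        rem rescans the disks for Windows installations and rebuilds the BCD store
        bootrec /RebuildBcd

    Since GRUB chainloads the Windows partition's boot sector, repairing only that sector and the BCD leaves the GRUB side of the setup untouched.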

    Read the article

  • Linux kernel with grsec + Java / Apache Tomcat

    - by NoozNooz42
    I've got a Debian Linux 64-bit dedicated server. The kernel has the grsec patch applied. I'm mainly using this server to run Apache Tomcat (6.0.26, Java 6) and everything seems fine. The only issue is that when I start Tomcat, I get a few of these:

        grsec: From xxx.xxx.xxx.xxx: Segmentation fault occurred at 00007fefe04e4000 in /home/t/jre1.6.0_20/bin/java[java:22403] uid/euid:1001/1001 gid/egid:1001/1001, parent /sbin/init[init:1] uid/euid:0/0 gid/egid:0/0
        grsec: more alerts, logging disabled for 10 seconds

    Then no error logs anymore; everything is fine. The kernel is: Linux 2.6.32.2-xxxx-grs-ipv4-64 #1 SMP Tue Dec 29 14:41:12 UTC 2009 x86_64 GNU/Linux. And the webapp works fine. So there are segmentation faults when Tomcat starts, but everything seems to work fine afterwards. Is this concerning? Should I move to a non-grsec kernel?
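    A common explanation: PaX's MPROTECT restriction trips on the JVM's JIT as it starts generating code, which matches alerts that appear only at startup and then go quiet. If that's the case here, the usual workaround is to mark the java binary so MPROTECT is relaxed for it alone. A sketch, assuming paxctl is installed and the kernel supports PT_PAX_FLAGS:

        # -c adds a PT_PAX_FLAGS program header if missing, -m disables MPROTECT for this binary
        paxctl -cm /home/t/jre1.6.0_20/bin/java
        paxctl -v /home/t/jre1.6.0_20/bin/java   # verify the flags took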

    Read the article

  • Are the Ubuntu ISO images on releases.ubuntu.com updated?

    - by tijybba
    Just got this idea from another question (which may not be related, though). Are the ISO images on the official site updated with updates to the core Ubuntu system, like kernel updates, desktop environment updates (Unity), and updates to the base system including X.org, the office suite, the package manager, the update manager, and the GNOME base modules, i.e. those released in update branches like precise-updates? The reason I am asking is that if I download the ISO image of Ubuntu 12.04 say two or three months after release, I then have to download approximately 200-300 MB of updates. So why are these ISO images not refreshed with recent updates? I am aware that not all components are updated at the same time, but let's say one month after the actual release (both LTS and normal releases), the updated components could be rolled into a refreshed ISO at regular intervals, giving new users the latest versions and features with improved stability and less bandwidth consumption. I am not suggesting a rolling release, updates from external PPAs, or a netinstall, but an ISO of officially updated packages, offered as an optional download. Since my question stays within the boundary of official update releases, stability should not be the concern. I guess there are custom packagers out there, but having an official option would be better. It would help distribute an up-to-date ISO that impresses new users, since it makes newer features available and of course a faster system. Another reason for asking is here. Edit: Almost all new (desktop) users download the default ISOs, which may have an issue or two that were corrected in later updates. Most of the new laptop users I encountered gave up because of that, so should I suggest that users whose laptops are not on the certified hardware list try the daily builds if needed?
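    As an aside, Canonical does fold accumulated updates into LTS point releases (12.04.1, 12.04.2, ...), and for images that do get respun, zsync can bring an older local ISO up to date by fetching only the changed blocks. A sketch; the URL follows the cdimage.ubuntu.com daily-image layout and is illustrative:

        # update an older local ISO in place, downloading only the blocks that differ
        zsync -i ubuntu-12.04-desktop-amd64.iso \
          http://cdimage.ubuntu.com/daily-live/current/precise-desktop-amd64.iso.zsync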

    Read the article

  • IE8 complains about SSL name mismatch

    - by Cerin
    When visiting an SSL-protected website, IE8 complains that the certificate name does not match the website address, but gives no information about the certificate or what name it's looking for. Visiting the same site in IE9 (or IE9 in "IE8 mode"), Firefox, Chrome, or Safari shows no problems, and the certificate matches the address. Certificate checkers indicate everything is installed and configured correctly. Does anyone know what might be causing this? Is this a known issue or bug in IE8? I've been Googling for similar issues, but given the uncertainty as to what's actually going on, I'm not sure what to search for. My problem reads similar to this question; however, my server is running Apache 2.2.
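    One classic cause fitting this pattern: IE8 on Windows XP does not support SNI, so on a server hosting several name-based SSL virtual hosts it receives the default vhost's certificate, while SNI-capable browsers get the matching one. openssl can reproduce both behaviors; the hostname below is a placeholder:

        # what an SNI-capable browser is shown
        openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject
        # what IE8 on XP is shown (no SNI): typically the default vhost's certificate
        openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject

    If the two subjects differ, the name-mismatch warning in IE8 is explained, and the fix is on the server side (a dedicated IP or a certificate covering all names).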

    Read the article

  • Live resize of a GPT partition on Linux

    - by cyberz
    On Linux I used to resize MBR partitions using fdisk, even on live filesystems, and then issue a resize2fs/pvresize/... (depending on fs type) to get the new space allocated. Lately I've been using Xen and GPT partitions, and I've noticed that unfortunately parted doesn't seem to allow on-the-fly resizing of a mounted partition, in fact it will complain: Error: Partition XXX is being used. You must unmount it before you modify it with Parted. I've tried both the resize command and even rm + mkpart combination, but they will both complain about the partition being mounted. How can I do that?
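    One workaround people use (risky, so save the table first) is to rewrite the GPT entry in place rather than "resize" it: delete the entry, recreate it with the identical start sector and a larger end, have the kernel re-read the table, then grow the filesystem. A hedged sketch with sgdisk, assuming partition 4 on a Xen disk /dev/xvda; STARTSECTOR must be copied exactly from the print output:

        sgdisk --backup=table.bak /dev/xvda    # save the GPT before touching it
        sgdisk -p /dev/xvda                    # note partition 4's exact start sector
        sgdisk -d 4 /dev/xvda                  # drop the table entry; data is untouched
        sgdisk -n 4:STARTSECTOR:0 /dev/xvda    # recreate from the same start to end of disk
        partprobe /dev/xvda                    # ask the kernel to re-read the table
        resize2fs /dev/xvda4                   # grow the ext filesystem online

    Getting the start sector wrong destroys the filesystem, which is why parted refuses to do this on a mounted partition in the first place.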

    Read the article

  • Subversion hooks no longer running

    - by Chris Lieb
    I don't know when this started happening, but, for some reason, none of my Subversion hooks are running anymore. I am running Subversion 1.6.9 on a Gentoo Linux machine, which has had its hooks work in the past. I am running Subversion through the svn_dav module for Apache 2.2. I modified the hook scripts that I make use of to write into a file in the /tmp directory owned by apache:apache whenever they are executed, but after making a commit, there is nothing in the file that should be written to. The scripts are executable and owned by apache:apache, so I don't think that is the issue. Here is one of my test scripts (post-commit.sh) that isn't getting executed:

        #!/bin/sh
        /bin/echo post-commit >> /tmp/z_test
        exit 0

    After running a commit, I expect both the pre-commit.sh and post-commit.sh hooks to run, but neither of them appears to be writing into the desired file (/tmp/z_test). What's going on?
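    One thing worth checking before anything else: Subversion invokes hooks by exact name (pre-commit, post-commit, and so on, with no extension on Unix), so a script saved as post-commit.sh never runs. Hooks also execute with an empty environment, which this script already handles by using absolute paths. A quick sketch; the repository path and revision are placeholders:

        cd /path/to/repo/hooks
        mv post-commit.sh post-commit
        chmod +x post-commit
        # simulate what mod_dav_svn does: run the hook as the apache user
        sudo -u apache ./post-commit /path/to/repo 1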

    Read the article

  • How can I get a 32-bit program to run on 64-bit Ubuntu?

    - by Carol
    Sorry to be asking this, but I have read quite a few posts and articles about the issue I am having, to no avail. I am trying to get a Second Life viewer (Firestorm) to run, and just keep getting the '64-bit error message' it throws. I have installed every 32-bit lib I can find; it still doesn't work. I think I must be missing some setting somewhere, or running Firestorm from the wrong place, or something, but I have no idea what. FWIW, Firestorm loads but doesn't behave right in the 32-bit version, either. I have actually tried several Linux distros, 32- and 64-bit. Mint 32-bit runs it straight off, and Mint 64-bit throws the '64-bit error'. openSUSE, any version, won't run it at all. Oh, and all the other SL viewers I have tried behave the same way. I am beginning to wonder if my setup just doesn't like Linux. Here is my system info:

        CPU: Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz (2661 MHz)
        Memory: 4026 MB
        OS Version: Linux 3.2.0-29-generic-pae #46-Ubuntu SMP Fri Jul 27 17:25:43 UTC 2012 i686
        Graphics Card Vendor: ATI Technologies Inc.
        Graphics Card: ATI Radeon HD 5700 Series

    I appreciate any help anyone can give me! Thanks so much! Carol :)
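    A hedged first check: the OS version above reports an i686 PAE kernel, which would mean this install is actually 32-bit Ubuntu rather than 64-bit, and that alone would change which viewer build is expected to work. The viewer path below is a placeholder:

        uname -m                      # x86_64 = 64-bit install, i686 = 32-bit
        file /path/to/firestorm-bin   # is the viewer binary a 32- or 64-bit ELF?
        # on a genuine 64-bit 12.04-era install, the catch-all 32-bit compatibility set:
        sudo apt-get install ia32-libs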

    Read the article

  • terminal tools and logs for debugging TCP issues

    - by kellogs
    I have a server which I am testing for functionality (not load, not stress) with tsung: 50 users per second, 100 users total. Judging from the graphs (tsung is the testing framework), the TCP connections line (red) drops to 0 while the commenced user sessions line (green) does not. The server logs show nothing to grab onto, so I am speculating some kind of TCP issue. Should this be the case, where would I look further on the server? Any logs or tools worth looking at? Only SSH available, no GUI.

        root@XMPP:~# cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=11.10
        DISTRIB_CODENAME=oneiric
        DISTRIB_DESCRIPTION="Ubuntu 11.10"

    Thank you
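    A few kernel-side counters can show whether the stack itself is refusing or dropping connections under the tsung load. A sketch for this Ubuntu 11.10 box, run over SSH while the test is going:

        ss -s                                   # socket summary: orphaned and timewait counts
        netstat -s | egrep -i 'listen|overflow|drop|retrans'
        # e.g. "N times the listen queue of a socket overflowed" points at backlog exhaustion
        dmesg | grep -i conntrack               # "nf_conntrack: table full, dropping packet"
        cat /proc/sys/net/core/somaxconn        # listen backlog cap (128 by default)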

    Read the article

  • Ubuntu 12.04 installation on GPT + RAID going into grub rescue

    - by Proy
    I have two 2TB disks. I am installing Ubuntu 12.04 using the alternate version of the server CD. On the partitioning page I have partitioned as follows:

        /dev/sda1 - 32 MB - bios_grub
        /dev/sda2 - 50 GB - RAID device
        /dev/sda3 - 8 GB - RAID device
        /dev/sda4 - remainder of disk - RAID device
        /dev/sdb1 - 32 MB - bios_grub
        /dev/sdb2 - 50 GB - RAID device
        /dev/sdb3 - 8 GB - RAID device
        /dev/sdb4 - remainder of disk - RAID device

    After this I have set up the RAID devices:

        /dev/md0 (/dev/sda2 + /dev/sdb2) for /, ext4
        /dev/md1 (/dev/sda3 + /dev/sdb3) for swap
        /dev/md2 (/dev/sda4 + /dev/sdb4) for /home, ext4

    The installation finishes and shows that it is installing GRUB to /dev/sda and /dev/sdb, but once the system reboots it falls into grub rescue mode. On doing ls I cannot see the md devices, only hd ones. I also tried booting into rescue mode with the install CD and doing grub-install /dev/sda and /dev/sdb. What am I doing wrong? Why is GRUB 2 not detecting the RAID devices? UPDATE: I just did the same steps with Ubuntu 10.04 and it worked perfectly fine. I wiped out the RAID and partitions and everything and did it from scratch. I think the issue is with Ubuntu 12.04 and the way it partitions 2 TB disks.
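    It may help to reinstall GRUB from a chroot with the arrays assembled, so its core image gets rebuilt with the mdraid modules it needs to find (md/0). A hedged sketch from the live/rescue environment, with device names following the layout above:

        sudo mdadm --assemble --scan
        sudo mount /dev/md0 /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt grub-install /dev/sdb
        sudo chroot /mnt update-grub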

    Read the article

  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM? Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage). There is no process in Manage named Stakeholder Management, but this “touch point” can be found in a variety of other processes, including Bid Transition (BT), Communication Management (CMM), and Organizational Change Management (OCHM).

    - Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
    - This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
    - Stakeholder management should be executed during the Execution and Control phase. For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
    - The broader topic of stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.

    Check it out and let me know your thoughts!

    Read the article

  • File permission issues after setting up an amazon ec2 instance

    - by Pardoner
    I've set up an Amazon EC2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups:

        adm:x:4:me,ubuntu
        www-data:x:33:me,www-data
        ssh:x:108:me
        admin:x:111:me
        ubuntu:x:1000:www-data,me
        me:x:1001:me

    but when I cd /var/www I can't do simple commands without doing sudo first. So I ran chmod -R www-data:www-data /var/www to ensure that I'm in the owning group, but I still have to type sudo for everything. If I sudo su www-data, it works fine. Since I'm in the www-data group, shouldn't I have the same privileges as www-data? One strange thing I'm noticing is that when I ls -l, it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory to not be part of a group?

        drwxr-xr-x  4 www-data 4.0K Oct 24 16:39 .
        drwxr-xr-x 14 root     4.0K Oct 10 16:58 ..
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com
        drwxrwxr-x  2 www-data 4.0K Oct  4 00:29 mywebsite.com
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com
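    Two things worth checking here, sketched below: new group memberships only take effect at the next login, so a shell opened before the groups were added still runs without www-data; and the group needs write permission on the tree. (Also note the command quoted above says chmod, while changing owner and group is chown's job.)

        groups                        # does the *current* session actually list www-data?
        newgrp www-data               # or log out and back in to pick up the new group
        sudo chown -R www-data:www-data /var/www
        sudo chmod -R g+w /var/www    # group write, without which membership doesn't help
        sudo chmod g+s /var/www       # optional: new files inherit the www-data group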

    Read the article

  • Wrong java -version being reported

    - by Malachi
    I am running Windows 7 Professional x64 and have the following Java versions installed:

        x64 (C:\Program Files\Java): jdk1.6.0_24, jdk1.7.0_04, jdk1.7.0_07, jre6, jre7
        x86 (C:\Program Files (x86)\Java): jre1.6.0_07, jre6, jre7

    In my environment variables I have PATH containing C:\Program Files\Java\jdk1.6.0_24\bin and JAVA_HOME set to C:\Program Files\Java\jdk1.6.0_24\bin. However, running java -version reports:

        java version "1.7.0_07"
        Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
        Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

    How is this the case when there is no reference to this version of Java in my environment variables? Any help on this issue would be great, as I am trying to run Apache Ant using Java 1.6.
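    A likely culprit: on Windows the JRE installer also drops a java.exe into C:\Windows\System32, which typically precedes the JDK's bin directory in PATH. (By convention JAVA_HOME points at the JDK root and PATH is extended with %JAVA_HOME%\bin; the sketch below uses the value as currently set, which already ends in \bin.) A quick check from cmd:

        rem list every java.exe on PATH in search order; System32 usually comes first
        where java
        echo %JAVA_HOME%
        rem invoke the JDK's java directly, bypassing PATH
        "%JAVA_HOME%\java" -version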

    Read the article

  • How does Outlook handle old recurring reminders?

    - by Zian Choy
    Context: Windows 7 Ultimate 32-bit edition, Microsoft Office 2007 Outlook.

    Steps to reproduce: Make an event that recurs once a week. Wait a week. See that it pops up OK. Wait a month. Notice that it doesn't say that it is a month overdue.

    Expected result: The usual note that the reminder is [x] weeks overdue.

    Actual result: Something like "6 days overdue".

    Possible exacerbating issue: I have many overdue reminders. For the ones that aren't time-critical (and all other things being equal), I work by category and age. For example, I do health-related reminders when I'm doing health stuff; if I have 2 health-related reminders, I do the older one first.

    Big question: How is Outlook supposed to handle this sort of overdue recurring reminder? Is there any way to get Outlook to act the way I expect it to?

    Read the article

  • How to build the mainline kernel source package?

    - by Maxime R.
    The Ubuntu kernel PPA only provides linux-headers*.deb and linux-image*.deb packages. How can I build the corresponding linux-source*.deb package? Context: I'm currently running Ubuntu 11.10 with the mainline kernel (3.2-rc6 at the moment) to get better support for my Sandy Bridge IGP (Dell E6420 laptop with an Intel i5-2520M CPU). As it happens, I'd like to install this touchpad driver, ALPS touchpads being badly supported (see the bug report in the previous link), while waiting for upstream support in kernel version 3.3. The problem is, DKMS keeps complaining about not finding the full kernel source: "Module build for the currently running kernel was skipped since the kernel source for this kernel does not seem to be installed." It appears I may not need the full source, but I'd still like to try having it installed to see if it solves my problem. What I tried:

    - Uncompressing the kernel.org source archive into /usr/src: DKMS still complained.
    - Manually updating the kernel source package with uupdate and the mainline source package as explained here: did not succeed.
    - Manually building the linux-source package following @roadmr's and @elmicha's instructions: I eventually succeeded in building it, but DKMS still complained about the missing source.

    At last I noticed an error I had not caught in the first place while reinstalling the kernel headers: the .deb I had downloaded may have been corrupted, and downloading it again did the trick :) Alas, while DKMS then agreed to compile the module, I ran into the following error, which appears to have already been reported. That issue isn't solved yet, but I won't pursue it, because in the end I decided to test kernel version 3.2-rc6 through the xorg-edgers PPA, which appears to be correctly patched: it works. Nevertheless, it might still be of some interest to know how to build the mainline linux-source package, as the Ubuntu Kernel Team doesn't provide it. Not to mention that I learned a lot in the process ^^
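    For what it's worth, DKMS doesn't strictly need a linux-source package; it needs a configured tree reachable through /lib/modules/$(uname -r)/build. A hedged sketch of satisfying that from a kernel.org tarball (version and file names are assumed to match the running mainline kernel; a tree prepared this way can still lack Module.symvers for some modules):

        cd /usr/src
        sudo tar xjf linux-3.2-rc6.tar.bz2
        cd linux-3.2-rc6
        sudo cp /boot/config-$(uname -r) .config
        sudo make oldconfig && sudo make modules_prepare
        sudo ln -sfn /usr/src/linux-3.2-rc6 /lib/modules/$(uname -r)/build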

    Read the article

  • Protecting PHP packages on server

    - by Jack
    Hi, I am a PHP developer and have recently decided to make one of my Magento extensions commercial. I have downloaded and configured MageParts CEM Server and that is all working perfectly in regard to licencing and delivery of module packages. The only issue is that the directory that the packages are stored in could be accessed by anyone. I tried this in a .htaccess file, but now it is not working:

        <Files services.wsdl>
            allow from all
        </Files>
        deny from all

    Clients are receiving a 403 Forbidden response. Have I done something wrong in the .htaccess file, or would there be a better way to secure the directory? Any help would be greatly appreciated.
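    For Apache 2.2, the usual shape of this rule states the Order explicitly, so the result doesn't depend on whatever Order the server config set elsewhere: deny everything, then let the Files section punch the single hole. A sketch (with Order Deny,Allow, Allow directives are evaluated after Deny and win):

        Order Deny,Allow
        Deny from all
        <Files "services.wsdl">
            Allow from all
        </Files>

    Moving the package directory outside the DocumentRoot entirely, and streaming files through a PHP download script, would sidestep the problem altogether.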

    Read the article

  • low performance on HPC cluster (sge) when running multiple jobs

    - by Yotam
    I know this is a long shot, but I'm clueless here. I'm running several computer simulations on a High Performance Computing (HPC) cluster running Oracle Grid Engine (SGE). A single job runs at a certain speed (roughly 80 steps per second); when I add jobs to the machine, past a certain threshold the speed is reduced by a factor of two. On one machine (I don't know the CPU model) the threshold is 11 jobs for 16 CPUs; on another one with the same number and kind of CPUs, the threshold is 8. I thought at first that this was a memory issue, but each job takes about 60MB - 100MB and I have 16GB of RAM on each of those machines. Has anyone encountered such a problem? Is there any way to analyze this? Thanks.
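    Given that one threshold sits at exactly half of the 16 visible CPUs, hyper-threading is a prime suspect: 8 physical cores exposed as 16 logical CPUs run at full speed until a 9th simultaneous job starts sharing a core. Worth comparing on both machines (a sketch):

        lscpu | egrep -i 'socket|core|thread'   # physical cores vs threads per core
        grep -c ^processor /proc/cpuinfo        # logical CPU count
        vmstat 5                                # run queue (r) vs core count under load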

    Read the article

  • Loading main javascript on every page? Or breaking it up to relevant pages?

    - by Kyle
    I have a 700kb (decompressed) JS file which is loaded on every page. Before, I had 12 JavaScript files on each page, but to reduce HTTP requests I combined them all into 1 file. This file is ~130kb gzipped and is served with gzip; however, on the client it is still unpacked and parsed on every page. Is this a performance issue? I've profiled the JavaScript with the Firebug profiler but did not see any issues. The problem/illusion I am facing is that there are jQuery libraries compressed into that file that are sometimes not used on the current page. For example, jQuery DataTables is 200kb compressed, and it is only loaded on 2 of my website's pages. Another is jqPlot, and that is another 200kb. I now have 400kb of excess code that isn't executed on 80% of the pages. Should I leave everything in 1 file? Should I take out the jQuery libraries and load only the relevant JS on the current page?

    Read the article
