Search Results

Search found 16404 results on 657 pages for 'easy transfer'.

Page 97 of 657

  • How could I import Postgres data dumps into MS SQL?

    - by dean nolan
    I have some data from a Postgres database dump (not CSV or anything similar) and I am looking to get it into MS SQL. Is there an easy way to do this, or a free tool that doesn't have limits on data import size, etc.? The Postgres database is on a Debian VM; I could export it to CSV there, but I am new to Linux and don't know how I would transfer the file from the VM to Windows 7. Thanks
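
    If a small script is acceptable, one possible route is to read straight from Postgres over the network and bulk-insert into SQL Server from the Windows side, which also sidesteps moving dump files off the VM. A rough sketch only; the connection strings, table, and column names below are made-up placeholders, and it assumes psycopg2, pyodbc, and a SQL Server ODBC driver are installed:

        # pg_to_mssql.py - stream rows from Postgres (on the VM) into MS SQL.
        import psycopg2   # pip install psycopg2-binary
        import pyodbc     # pip install pyodbc

        PG_DSN = "host=192.168.56.101 dbname=mydb user=postgres password=secret"
        MSSQL_DSN = ("DRIVER={ODBC Driver 17 for SQL Server};"
                     "SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes")

        pg = psycopg2.connect(PG_DSN)
        ms = pyodbc.connect(MSSQL_DSN)
        src, dst = pg.cursor(), ms.cursor()

        # The target table must already exist with a compatible schema.
        src.execute("SELECT id, name, created_at FROM customers")
        while True:
            rows = src.fetchmany(1000)
            if not rows:
                break
            dst.executemany(
                "INSERT INTO customers (id, name, created_at) VALUES (?, ?, ?)",
                rows)
        ms.commit()
        pg.close()
        ms.close()

    If restoring the dump into a local Postgres instance first is an option, the same script works against localhost just as well.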

    Read the article

  • Hibernate Lazy initialization exception problem with Gilead in GWT 2.0 integration

    - by sylsau
    Hello, I use GWT 2.0 as the UI layer of my project. On the server side, I use Hibernate. For example, here are two domain entities that I have:

        public class User {
            private Collection<Role> roles;

            @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY,
                        mappedBy = "users", targetEntity = Role.class)
            public Collection<Role> getRoles() { return roles; }
            ...
        }

        public class Role {
            private Collection<User> users;

            @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY,
                        targetEntity = User.class)
            public Collection<User> getUsers() { return users; }
            ...
        }

    In my DAO layer, UserDAO extends Spring's HibernateDaoSupport and has a getAll method that returns all Users. My service layer has a UserService that uses UserDAO to fetch all Users. When I get all Users from UserService, the returned entities are detached from the Hibernate session, so I cannot call getRoles() on the User instances I get from the service. What I want is simply to transfer my list of Users through an RPC service so that I can use the other User information on the client side with GWT. My main problem is therefore converting the PersistentBag in User.roles into a plain List so that the Users can be sent over RPC. I have read that the Gilead framework could be a solution.

    To use Gilead, I changed my domain entities: they now extend net.sf.gilead.pojo.gwt.LightEntity and respect the JavaBean specification. On the server, I expose my services via RPC using the GwtRpcSpring framework (http://code.google.com/p/gwtrpc-spring/), which provides an advice that makes Gilead integration easier. My applicationContext contains the following configuration for Gilead:

        <bean id="gileadAdapterAdvisor" class="org.gwtrpcspring.gilead.GileadAdapterAdvice" />
        <aop:config>
            <aop:aspect id="gileadAdapterAspect" ref="gileadAdapterAdvisor">
                <aop:pointcut id="gileadPointcut"
                    expression="execution(public * com.google.gwt.user.client.rpc.RemoteService.*(..))" />
                <aop:around method="doBasicProfiling" pointcut-ref="gileadPointcut" />
            </aop:aspect>
        </aop:config>
        <bean id="proxySerializer" class="net.sf.gilead.core.serialization.GwtProxySerialization" />
        <bean id="proxyStore" class="net.sf.gilead.core.store.stateless.StatelessProxyStore">
            <property name="proxySerializer" ref="proxySerializer" />
        </bean>
        <bean id="persistenceUtil" class="net.sf.gilead.core.hibernate.HibernateUtil">
            <property name="sessionFactory" ref="sessionFactory" />
        </bean>
        <bean class="net.sf.gilead.core.PersistentBeanManager">
            <property name="proxyStore" ref="proxyStore" />
            <property name="persistenceUtil" ref="persistenceUtil" />
        </bean>

    The code of the doBasicProfiling method is the following:

        @Around("within(com.google.gwt.user.client.rpc.RemoteService..*)")
        public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
            if (log.isDebugEnabled()) {
                String className = pjp.getSignature().getDeclaringTypeName();
                String methodName = className.substring(className.lastIndexOf(".") + 1)
                        + "." + pjp.getSignature().getName();
                log.debug("Wrapping call to " + methodName + " for PersistentBeanManager");
            }
            GileadHelper.parseInputParameters(pjp.getArgs(), beanManager,
                    RemoteServiceUtil.getThreadLocalSession());
            Object retVal = pjp.proceed();
            retVal = GileadHelper.parseReturnValue(retVal, beanManager);
            return retVal;
        }

    With that configuration, when I run my application and use the RPC service that fetches all Users, I get a Hibernate lazy initialization exception on User.roles.
    I am disappointed, because I thought Gilead would let me serialize my domain entities even when they contain a PersistentBag. Isn't that one of Gilead's goals? Does anyone know how to configure Gilead (with GwtRpcSpring or another solution) so that domain entities can be transferred without the lazy initialization exception? Thanks in advance for your help. Sylvain

    Read the article

  • Interop.Outlook.UserProperties.Add causing problem during connection time

    - by aanataliya
    Hi all, I have created a plug-in for Outlook. The plug-in contains only the code below:

        private void OnNewOutlookInspector(Outlook.Inspector OutlookInsptr)
        {
            Outlook.MailItem MlItem = (Outlook.MailItem)OutlookInsptr.CurrentItem;
            // If I remove the line below, everything works fine.
            MlItem.UserProperties.Add("INSPINIT", Outlook.OlUserPropertyType.olText, true, true).Value = "1";
        }

        public void OnConnection(object application, Extensibility.ext_ConnectMode connectMode,
                                 object addInInst, ref System.Array custom)
        {
            applicationObject = application;
            addInInstance = addInInst;
            MessageBox.Show("in connection new 2");
            OutlkApp = (Outlook.Application)application;
            OutlkInsptrs = OutlkApp.Inspectors;
            OutlkInsptrs.NewInspector += new Outlook.InspectorsEvents_NewInspectorEventHandler(OnNewOutlookInspector);
        }

    The problem I am facing is that when I send an HTML mail while the plug-in is enabled, it arrives at the receiving end as plain text. Below is the mail content, along with the header and body, at the receiving end:

        x-sender: [email protected]
        x-receiver: [email protected]
        Received: from blr-s-07.pointcrossblr.com ([192.168.1.107]) by blr-ws-134.pointcrossblr.com with Microsoft SMTPSVC(6.0.2600.5949); Wed, 22 Dec 2010 17:11:02 +0530
        Received: from blrws134 ([192.168.1.175]) by blr-s-07.pointcrossblr.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 22 Dec 2010 17:11:02 +0530
        From: "Ashif Nataliya" <[email protected]>
        To: <[email protected]>
        Cc: <[email protected]>
        Subject: RTF FRM blr to pc.com cc blr-ws-134
        Date: Wed, 22 Dec 2010 17:11:02 +0530
        Message-ID: <[email protected]>
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="----=_NextPart_000_00F7_01CBA1FB.36115580"
        X-Mailer: Microsoft Outlook 14.0
        Content-Language: en-us
        X-MS-TNEF-Correlator: 00000000DCB2344DE8F50F4FBC91085BB5C06D55A4172000
        thread-index: AcuhzRuTOBkvHPUnS1aLi9+cHNAWhA==
        Return-Path: [email protected]
        X-OriginalArrivalTime: 22 Dec 2010 11:41:02.0822 (UTC) FILETIME=[1C788860:01CBA1CD]

        This is a multipart message in MIME format.

        ------=_NextPart_000_00F7_01CBA1FB.36115580
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit

        HTML Test Test Mail

        ------=_NextPart_000_00F7_01CBA1FB.36115580
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="winmail.dat"

        // and some other code.....

    Any help is appreciated. Thanks.

    Read the article

  • List all files and dirs without recursion with junctions

    - by naxa
    Is there a native or portable tool that can give me a Unicode (or at least system-locale-compatible) list of all files and directories under a path recursively, without recursing into junction points or links, in Windows? For example, the built-in dir command, as well as takeown and icacls, run into an infinite loop with the Application Data directory (1). EDIT: I would like to get a text file, or at least something easy to copy to the clipboard, as output.
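
    For what it's worth, here is a minimal Python 3 sketch of such a lister (my own suggestion, not an existing tool): it prunes anything flagged as a reparse point, which covers junctions and symlinks, and writes a UTF-8 file list. st_file_attributes is only populated on Windows, which is assumed here.

        # list_tree.py - list every file and directory under a path without
        # descending into junctions or symlinks (reparse points).
        import os
        import stat
        import sys

        def is_reparse_point(path):
            try:
                attrs = os.lstat(path).st_file_attributes   # Windows only
            except (OSError, AttributeError):
                return False
            return bool(attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT)

        root = sys.argv[1] if len(sys.argv) > 1 else "."
        with open("filelist.txt", "w", encoding="utf-8") as out:
            for dirpath, dirnames, filenames in os.walk(root):
                # Prune reparse points so os.walk never recurses into them.
                dirnames[:] = [d for d in dirnames
                               if not is_reparse_point(os.path.join(dirpath, d))]
                for name in dirnames + filenames:
                    out.write(os.path.join(dirpath, name) + "\n")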

    Read the article

  • Going from small to medium sized websites.

    - by Landitus
    I've been coding websites for a couple of years now, mostly in PHP and XHTML. I come from the design world, but I'm proud of building standards-compliant websites and great interfaces. I have also used WordPress and loved it. Most of my work has been really simple commercial websites, with no database involved, where everything is done from scratch and every page is served through an index?page=xxx entry point. But I have a few prospects that are larger websites (let's call them "medium-sized websites") where I feel I'm lacking the following:

    - a way to dispatch or render the pages (an MVC controller instead of index?page=???)
    - a proper page hierarchy and an easy breadcrumbs implementation
    - auto-generation of the navigation menu, or an easy way to maintain it
    - clean URLs
    - form validation
    - easy database support

    I really don't know whether I should keep refining my own PHP skills, or look into a CMS (like Drupal) or a PHP framework. I found WordPress very reassuring and didn't feel trapped by crazy conventions, but I feel it is not the right tool for this. I hate the typical CMS "page with the big textbox", as I am used to coding every page by hand; my pages are not just a title and a textbox. Got the feeling? My PHP skills are still sort of medium/low, but I would like to hear some thoughts on what I should learn to take the next step!

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to as many as 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4 CPUs and Asus motherboards. Each has four 500 GB SATA2 hard drives: one for the OS and other data for the Proxmox install, and three using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images into KVM for all of the virtual machines. I can currently do live migration from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, which runs Windows 2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store them on an external hard drive on the network. I then use the JungleDisk service (backed by Rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only moves a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that lets me instantly take the difference between two points in time (say, what was written from 6am to 7am), zip it, and send that difference file to the backup server, which would immediately transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip2 or similar, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?

    Read the article

  • How to copy a 200GB file faster?

    - by RainDoctor
    I have a 200 GB .tgz file on server A (RHEL 5.2). I want to transfer that file to server B (RHEL 5.3). Server B runs on ESXi 4 Update 1; I gave the server B VM 10 GB of RAM and 4 vCPUs. Server A and server B are connected directly with an Ethernet cable using local IP addresses (no switch involved). scp gives me about 3 Mbps. Is there a way to get 400 Mbps?
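
    scp adds encryption overhead, so one quick test is whether a plain unencrypted TCP stream can fill the link; netcat does exactly this, and the Python sketch below is the same idea (the address and port are made up). If raw TCP is also slow, the bottleneck is more likely the network path (duplex/auto-negotiation, or the VM's virtual NIC) than scp itself.

        # receiver.py - run on server B first; writes the incoming stream to disk.
        import socket

        with socket.create_server(("0.0.0.0", 9000)) as srv:
            conn, addr = srv.accept()
            with conn, open("backup.tgz", "wb") as out:
                while True:
                    chunk = conn.recv(1 << 20)   # 1 MiB reads
                    if not chunk:
                        break
                    out.write(chunk)

        # sender.py - run on server A; streams the file to B.
        import socket

        with socket.create_connection(("192.168.0.2", 9000)) as s, \
             open("backup.tgz", "rb") as f:
            while True:
                chunk = f.read(1 << 20)
                if not chunk:
                    break
                s.sendall(chunk)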

    Read the article

  • I disconnected my cellphone while transferring files to its Mini SD card. Now the files aren't there

    - by Martín Fixman
    I use Ubuntu 9.10, and the MiniSD card shows its space as used, as if the files were there. Baobab (the disk usage analyzer) shows that the card only has 118 MB in use (of the 401 MB Ubuntu claims are used). Of course, I have already tried the obvious (rebooting the phone, adding and removing files, etc.), but I don't want to format the card, because I still have some files on it, the transfer to my computer is slow, and because I use an old cable it often fails.

    Read the article

  • Unlimited online backup space for fixed price using rsync/FTP/other simple protocol

    - by barrycarter
    Many companies offer unlimited online backup space for a fixed price (mozy.com, twitter.com/allmydata, onlinestoragesolution.com, etc.), but they either use proprietary, non-Linux-friendly software, have gone out of business, or don't actually work. Who offers reliable unlimited online backup space for a fixed price that is compatible with rsync, FTP, or other generic/open-source file transfer protocols? Or has anyone written software that lets me treat Mozy's (or a similar provider's) space as though it were regular file space (e.g. a "mozyfs")?

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1M pages (URLs ending in a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader fetches files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
            curl.url = url_info[:url]
            curl.perform
            file = File.new(url_info[:file], "wb")
            file << curl.body_str
            file.close
            # ... some other stuff
        }

    I tried downloading a sample of 8000 pages. Using the code above, I get 1000 pages in 2 minutes. When I write all the URLs into a file and run this in a shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need it in Ruby code, because there is other monitoring and processing code around it. I have tried:

    - Curl::Multi - it is somewhat faster, but misses 50-90% of the files (it does not download them and gives no reason or error code)
    - multiple threads with Curl::Easy - around the same speed as single-threaded

    Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than handle downloading for this case in a different way. Before this, I was calling command-line wget, which I fed a file with the list of URLs. However, not all errors were handled, and it was not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the curl command. But why, when I can use Curl directly from Ruby? Code for the download manager is here, if it might help: Download Manager. (I have played with timeouts, from not setting them to various values; it did not seem to help.) Any hints appreciated.

    Read the article

  • Pass HAProxy healthcheck requests as User-agent "LB-Check" to the backend webservers(apache)

    - by Joseph
    I have an HAProxy setup in front of webservers (Apache) for load balancing. Health checks for these webservers are also configured in HAProxy:

        option httpchk HEAD /healthcheck.txt HTTP/1.0

    Is it possible to send these health-check requests to the backend webservers with an "LB-Check" User-Agent (or any other marker), so that I can distinguish them from other log entries? I don't want to use the "dontlog" option, because I don't want to miss these entries.
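
    Two hedged suggestions, neither verified against this exact HAProxy version: extra headers can reportedly be appended to the version field of option httpchk (for example HTTP/1.0\r\nUser-Agent:\ LB-Check), and failing that, the probes can simply be split out of the Apache logs after the fact, as in this small sketch keyed on the probe path:

        # split_healthchecks.py - separate HAProxy probe hits from real traffic
        # in an Apache access log, keyed on the healthcheck request line.
        PROBE = "HEAD /healthcheck.txt"

        with open("access.log") as log, \
             open("healthchecks.log", "w") as probes, \
             open("traffic.log", "w") as traffic:
            for line in log:
                (probes if PROBE in line else traffic).write(line)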

    Read the article

  • Migrate data from one server to another using rsync

    - by Leonid Shevtsov
    I'm moving from one VPS to another, and I figured that the simplest way to transfer the data would be rsync. However, the data is owned by a user, www-data, which doesn't have SSH access, and I'd like it to be owned by the same (named) user on the target machine. Obviously I need all file permissions preserved. I have SSH access via another user with sudo privileges on both machines. Is it possible to do this with rsync?

    Read the article

  • Rsync: remote source and destination

    - by goncalopp
    If both source and destination are remote, rsync complains:

        The source and destination cannot both be remote.
        rsync error: syntax or usage error (code 1) at main.c(1156) [Receiver=3.0.7]

    Is there an insurmountable technical obstacle to making rsync do this, or is it simply a case of not-yet-implemented? It seems relatively easy to create a local buffer in memory that mediates the transfer between the two remotes, holding both hashes and data. Conversely, is there other (Unix) software that implements this functionality?
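
    As a rough illustration of that "local buffer that mediates between two remotes" idea (this is a plain copy piped through the local machine, with none of rsync's delta transfer; the host names and paths are made up):

        # relay.py - pipe a directory from one remote host to another through
        # the local machine, buffering in memory only.
        import subprocess

        SRC, SRC_DIR = "user@hostA", "/var/www"
        DST, DST_DIR = "user@hostB", "/var/www"

        reader = subprocess.Popen(
            ["ssh", SRC, "tar -C {} -cf - .".format(SRC_DIR)],
            stdout=subprocess.PIPE)
        writer = subprocess.Popen(
            ["ssh", DST, "tar -C {} -xf -".format(DST_DIR)],
            stdin=reader.stdout)
        reader.stdout.close()    # so the reader sees a broken pipe if tar -x dies
        writer.wait()
        reader.wait()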

    Read the article

  • A Digg-like rotating homepage of popular content, how to include date as a factor?

    - by Ferdy
    I am building an advanced image-sharing web application. As you may expect, users can upload images, and others can comment on them, vote on them, and favorite them. These events determine the popularity of an image, which I capture in a "karma" field. Now I want to create a Digg-like homepage system showing the most popular images. That part is easy, since I already have the weighted karma score; I just sort on it descending to show the 20 most valued images. The part that is missing is time. I do not want extremely popular images to sit on the homepage forever. An easy solution would be to restrict the result set to the last 24 hours. However, I'm also thinking that, to keep images rotating throughout the day, time could be a variable whose offset influences the image's sort order. Specific questions:

    - Would you recommend the easy scenario (just sort for the best images within 24 hours) or the more sophisticated one (use the datetime offset as part of the sorting)?
    - If you advise the latter, any help on the mathematical side of it?
    - Would it be best to run a scheduled service to mark images for the homepage, or would you advise a direct query (I'm using MySQL)?

    As an extra note, the homepage should support paging, and on a quiet day it should include entries from earlier days so that it is always "filled". I'm not asking the community to build this algorithm, just looking for some advice :)
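
    For the "datetime offset as part of the sorting" option, one common approach is to divide karma by a power of the image's age, Hacker-News style. A sketch of that idea (the field names and gravity value are invented, not from the question):

        # ranking.py - a sketch of karma decayed by age.
        # score = karma / (age_in_hours + 2) ** GRAVITY
        # A higher GRAVITY pushes older images off the homepage faster.
        from datetime import datetime, timezone

        GRAVITY = 1.5

        def score(karma, created_at, now=None):
            now = now or datetime.now(timezone.utc)
            age_hours = (now - created_at).total_seconds() / 3600.0
            return karma / (age_hours + 2) ** GRAVITY

        # A day-old hit vs. a modest image uploaded an hour ago:
        now = datetime(2010, 5, 2, tzinfo=timezone.utc)
        print(score(500, datetime(2010, 5, 1, tzinfo=timezone.utc), now))       # ~3.8
        print(score(80, datetime(2010, 5, 1, 23, 0, tzinfo=timezone.utc), now))  # ~15.4

    The same expression can also be evaluated directly in the MySQL query (something like ORDER BY karma / POW(TIMESTAMPDIFF(HOUR, created_at, NOW()) + 2, 1.5) DESC) or precomputed into a score column by a scheduled job, which covers both sides of the scheduled-service-versus-direct-query question.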

    Read the article

  • Assign a drive letter to a Solaris disk in a Windows box

    - by Cat
    I need some way to map a UFS Solaris drive (i.e. assign a drive letter to it) while it is in a Windows XP box. I've found utilities that will let me transfer files from a Solaris disk to an NTFS disk on the Windows box, but nothing that will let me map or share that Solaris disk. And no, putting the Solaris disk in a Solaris box and using something like Samba to share it is unfortunately not an option. Cat

    Read the article

  • Managing a wireless internet connection

    - by cornjuliox
    I've just recently purchased a USB Wi-Fi adapter and have been using the software (for Windows XP) that comes with it to search for and connect to networks, but it's really quite slow and featureless. Are there free/OSS alternatives for Windows XP that I can replace it with? Preferably something that, in addition to connecting to and searching for wireless networks, can display stats like signal strength and transfer speed on graphs, so I can better monitor the quality of my connection.

    Read the article

  • Ninety-Fifth Percentile Calculation for Bandwidth

    - by Kyle Brandt
    I am trying to calculate the bandwidth usage of my current internet connection. I am pulling the current input and output transfer rates via SNMP. If the argument to the following function is a sorted, ascending list of the sums of each input and output sample, is this the right way to calculate the 95th percentile?

        sub ninetyFifth {
            # Expects sorted data
            my $ninetyFifthLine = (@_ * .95) - 1;
            return $_[$ninetyFifthLine];
        }
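
    For comparison, here is a small sketch (my own, not from the question) of the nearest-rank convention usually quoted for 95th-percentile billing: sort ascending, drop the top 5% of samples, and return the largest remaining value, i.e. the element at index ceil(0.95 * n) - 1. The Perl above truncates (0.95 * n) - 1 instead, and the two differ by one whenever 0.95 * n is not a whole number.

        # percentile95.py - nearest-rank 95th percentile of bandwidth samples.
        import math

        def ninety_fifth(samples):
            """samples: iterable of per-interval transfer rates (any unit)."""
            data = sorted(samples)
            if not data:
                raise ValueError("no samples")
            # Nearest rank: the smallest value >= 95% of all samples.
            idx = math.ceil(0.95 * len(data)) - 1
            return data[idx]

        print(ninety_fifth(range(1, 101)))   # -> 95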

    Read the article

  • Windows XP: saving large files on network share stalls

    - by mklhmnn
    When I transfer larger files (a few hundred MB) to a network share (either a Buffalo LinkStation or another Windows machine) from my Windows XP Pro SP3 machine, the transfer always stalls. Smaller files are no problem, and reading from a network share is also no problem. I already had this problem on my notebook and now see it on my desktop machine, so I assume it is most likely not a driver problem. Does anybody have a clue what the problem could be, or better yet, the solution?

    Read the article

  • My computer is not reading my PNY SD 1GB memory card

    - by Jessica
    I use a Kodak EasyShare C160 digital camera, a PNY SD 1GB memory card, and a Dell Latitude E5500 computer. I have had my camera for over a year and have always been able to transfer my pictures to my computer. Now my computer does not recognize my memory card and I get a message from the EasyShare software that says "Cannot get device information", although my computer does recognize the pictures stored on my camera's internal memory. Is there any way to access the pictures on my memory card, or are they lost forever?

    Read the article

  • What technologies allow bidirectional streaming of video?

    - by Roman
    Wikipedia says that Flash allows "bidirectional streaming of audio and video". Is it possible to do that with other technologies (for example, with JavaScript)? In other words, I want to transfer video from one user of a web site to another in real time. I want something that is already installed by many users or is easy to install (Flash fulfills these requirements). And I want something free.

    Read the article

  • How can Rackspace beat DigitalOcean's Pricepoint? [on hold]

    - by Matt Jensen
    I have recently discovered DigitalOcean, and I have found it to be a relatively nice experience for small staging servers. So the thought occurred to me: why am I paying about $267 for a server on Rackspace (40 GB RAM, 160 GB drive, 2 vCPUs, 400 Mb/s) when DigitalOcean offers a server for $40 (40 GB RAM, 60 GB drive [storage is not a concern of mine], 2 vCPUs, ? Mb/s)? Does Rackspace offer some kind of obvious advantage in transfer speed/bandwidth? My applications are small startups that for the immediate future will only have about 200-300 concurrent users at once.

    Read the article

  • List Squid's internal ip:port to external ip:port mapping table

    - by joshperry
    I'm assuming that Squid keeps a list of the internal ip:port a request is made from and the matching external ip:port the request is fulfilled with. In the case of a long transfer, such as a file download, it would be nice to be able to see which internal ip:port is downloading the file. I can easily see the traffic and get the external ip:port that Squid is using with tcpdump or iptraf, but I can't find a way to map this back to an internal ip:port.

    Read the article
