Search Results

Search found 36111 results on 1445 pages for 'mysql update'.

Page 741/1445 | < Previous Page | 737 738 739 740 741 742 743 744 745 746 747 748  | Next Page >

  • Which guide do you recommend for setting up Nginx?

    - by Saif Bechan
    I am setting up a LEMP stack (Nginx, MySQL, PHP on Linux) from scratch. There are a lot of guides available online in all different forms. I want a setup with virtual hosts that only serves dynamic content (PHP); my static files (images, CSS, JS) are on a CDN. Do you know of a good guide for this kind of LEMP installation?

    Read the article

  • Failed Software RAID0 on Linux - Attempting to recover data

    - by Gizmo_the_Great
    I have a two disk RAID0 software raid (not hardware raid) that is reported to have failed during boot and my OS won't start. Using a Live CD, I get the following output : sudo mdadm -E /dev/sdc1 /dev/sdd1 /dev/sdc1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 3710713d:fb301031:84b61247:d1d53e0f Name : HP-xw9300:0 Creation Time : Sun Sep 1 15:22:26 2013 Raid Level : -unknown- Raid Devices : 0 Avail Dev Size : 1465145328 (698.64 GiB 750.15 GB) Data Offset : 16 sectors Super Offset : 8 sectors State : active Device UUID : ad427cd2:9f885f57:7f41015f:90f8f6af Update Time : Sun Jun 8 12:35:11 2014 Checksum : a37407ff - correct Events : 1 Device Role : spare Array State : ('A' == active, '.' == missing) /dev/sdd1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 3710713d:fb301031:84b61247:d1d53e0f Name : HP-xw9300:0 Creation Time : Sun Sep 1 15:22:26 2013 Raid Level : -unknown- Raid Devices : 0 Avail Dev Size : 976771056 (465.76 GiB 500.11 GB) Data Offset : 16 sectors Super Offset : 8 sectors State : active Device UUID : 2ea0199d:cb08d9e7:0830448a:a1e1e348 Update Time : Sun Jun 8 13:06:19 2014 Checksum : 8883c492 - correct Events : 1 Device Role : spare Array State : ('A' == active, '.' == missing) GParted lists both disks, detects the flags as 'Raid' and lists the data usage. Can anyone please help me re-assemble just so that I can copy some of the data off that I have not backed up recently? Thanks

    Read the article

  • How do I register RSS for a website?

    - by domainking
    I am not sure if I am asking this question in the right place, because I am new to this. What I want to ask is: do I need to register/create an RSS feed for my website? I have a website, let's say http://blog.domain.com, which is a WordPress 2.9.2 blog. If I want to display its latest content on another subdomain, for example news.domain.com, how do I do that? I know a little bit of PHP and MySQL.
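
    The asker works in PHP, but the underlying idea is language-agnostic: WordPress already publishes an RSS feed of the latest posts, so the other subdomain only needs to fetch and render that feed. Here is a minimal sketch in Python, purely for illustration; the feed URL shown is the standard WordPress default, and everything else is hypothetical.

        #!/usr/bin/env python3
        # Sketch: pull the newest posts from a WordPress blog's built-in RSS feed.
        # FEED_URL assumes the default WordPress feed location.
        import urllib.request
        import xml.etree.ElementTree as ET

        FEED_URL = "http://blog.domain.com/feed/"

        def latest_posts(limit=5):
            with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
                tree = ET.parse(resp)
            for item in tree.findall("./channel/item")[:limit]:
                yield item.findtext("title"), item.findtext("link")

        if __name__ == "__main__":
            for title, link in latest_posts():
                print("%s - %s" % (title, link))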

    Read the article

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then I'll put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always fetch their packages from online repositories somewhere. I know I could probably download the sources and compile them myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions. Plan A: Run a server VM on my Mac and then use this VM as the package source for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --downloadonly so that the package and its dependencies are downloaded into my VM, and my server can then use the VM as the source repo for apt-get? Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should make the server a VM host so I can simply copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image). If I were working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run on the server. If you can suggest another way, it would be much appreciated. Update: From Michael Hampton's answer, I found a possible solution, which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.

    Read the article

  • How do I prevent lighttpd from caching static files, even when modified on disk?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a directory that I regularly update. This changes the file content (and file size) as well as the modification date, but not the filenames. When I access the files over HTTP, the updates are not taken into account and lighty serves the old file. If I manually rename a file to something different, lighttpd returns a 404 error, and if I rename the file back, I get the correct updated version. It seems like lighty is using some kind of caching mechanism of its own (which is fine) to return static files. Unfortunately, that mechanism doesn't seem to update itself when files are modified. I checked with Wireshark, and my browser really is making a request for the file, so this is not a browser caching issue. It returns a 200 OK when requesting it from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighty to reflect the changes made on disk directly, or at least to be able to invalidate its cache.

    Read the article

  • Drop caches in Linux

    - by BerserkEVA
    I have an embedded Geode-based application server with 512 MB of RAM, and I'm trying to maximize free memory during the application workload, which uses a MySQL 5.5 database with InnoDB quite aggressively. As part of the overall optimization I'd like to add the following to crontab:

        echo 1 > /proc/sys/vm/drop_caches
        sleep 5
        echo 0 > /proc/sys/vm/drop_caches

    How often is it safe to execute something like this? Any other observation/suggestion is welcome.
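
    For context, drop_caches only discards clean, reclaimable cache pages; it does not free memory that processes such as mysqld actually hold, and the kernel refills the cache as soon as files are read again. A small Python sketch of the same operation, assuming it runs as root (writing 1 drops the page cache, 2 drops dentries and inodes, 3 drops both):

        #!/usr/bin/env python3
        # Sketch (root required): flush dirty pages with sync, then ask the
        # kernel to drop the page cache, mirroring the cron snippet above.
        import subprocess

        def drop_page_cache(level="1"):
            # sync first so dirty pages are written out; drop_caches frees only clean pages
            subprocess.run(["sync"], check=True)
            with open("/proc/sys/vm/drop_caches", "w") as f:
                f.write(level + "\n")

        if __name__ == "__main__":
            drop_page_cache("1")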

    Read the article

  • SVN very slow over HTTP (seems auth related)

    - by Sydius
    I'm using SVN 1.6.6 (r40053) from the command line on Ubuntu 10.04, connecting over HTTP to a remote repository on the local network. For a while it worked fine, but it has recently become very slow for any operation that requires communication with the repository; it does eventually work after several minutes (~3m for svn up). Looking at Wireshark, it appears to take a full minute between the HTTP auth denied response and the subsequent request containing credentials. The issue is local to my machine: other coworkers running Ubuntu are not having it, and when I tried my credentials from another machine it was very fast. I tried deleting the .subversion folder in my home directory and checking everything out fresh, but it didn't help. Update: I think it's auth related. When I check out SVN repositories over HTTP from the Internet (from Google Code, for example), everything is very fast until I do something that requires a password. Before prompting for the password for the first time, it stalls for at least a minute. Update 2: I set the neon-debug-mask in the SVN settings (in /etc/subversion/servers under [Global]) to 138 and it seems to spend a lot of time on 'auth: Trying Basic challenge...'

    Read the article

  • Other processes take over port 80 when restarting Apache - why, and how to solve?

    - by user72149
    I have a CentOS 5.5 server running Apache on port 80 as well as some other applications. All works fine until I for some reason need to restart the httpd process. Doing so returns: sudo /etc/init.d/httpd restart Stopping httpd: [ OK ] Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs First I thought perhaps httpd had frozen and was still running, but that was not the case. So I ran netstat to find out what was using port 80: sudo netstat -tlp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:7203 *:* LISTEN 24012/java tcp 0 0 localhost.localdomain:smux *:* LISTEN 3547/snmpd tcp 0 0 *:mysql *:* LISTEN 21966/mysqld tcp 0 0 *:ssh *:* LISTEN 3562/sshd tcp 0 0 *:http *:* LISTEN 3780/python26 Turns out that my python process had taken over listening to http in the instant that httpd was restarting. So, I killed python and tried starting httpd again - but ran into the same error. Netstat again: sudo netstat -tlp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:7203 *:* LISTEN 24012/java tcp 0 0 localhost.localdomain:smux *:* LISTEN 3547/snmpd tcp 0 0 *:mysql *:* LISTEN 21966/mysqld tcp 0 0 *:ssh *:* LISTEN 3562/sshd tcp 0 0 *:http *:* LISTEN 24012/java Now my java process had taken over listening to http. I killed that too and could then successfully restart httpd. But this is a terrible workaround. Why will these python and java processes start listening to port 80 as soon as httpd is restarted? How to solve? Two other comments. 1) Both java and python processes are started by apache from a php script. But when apache is restarted, they should not be affected. And 2) I have the same setup on two other machines running Ubuntu and there's no problem there. Any ideas? Edit: The Java process listens to port 7203 and the python process supposedly doesn't listen to any port. For some reason, they start listening to port 80 when apache is restarted. This hasn't happened before. On Ubuntu it runs fine. For some reason, on my current CentOS 5.5 machine, this problem arises.
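
    As an aside, the diagnosis above leans on netstat -tlp; the same information can be dug out of /proc directly, which is roughly what netstat does under the hood. A rough Python sketch (it needs root to see other users' file descriptors, and port 80 is just the example from the question):

        #!/usr/bin/env python3
        # Sketch: find which PIDs hold a listening TCP socket on a given port by
        # reading /proc/net/tcp* and matching socket inodes to /proc/<pid>/fd.
        import os

        def listening_inodes(port):
            inodes = set()
            for path in ("/proc/net/tcp", "/proc/net/tcp6"):
                try:
                    lines = open(path).read().splitlines()[1:]
                except OSError:
                    continue
                for line in lines:
                    fields = line.split()
                    local_port = int(fields[1].split(":")[-1], 16)
                    if fields[3] == "0A" and local_port == port:  # 0A == LISTEN
                        inodes.add(fields[9])
            return inodes

        def pids_for_inodes(inodes):
            targets = {"socket:[%s]" % i for i in inodes}
            for pid in filter(str.isdigit, os.listdir("/proc")):
                fd_dir = "/proc/%s/fd" % pid
                try:
                    for fd in os.listdir(fd_dir):
                        if os.readlink(os.path.join(fd_dir, fd)) in targets:
                            yield pid
                            break
                except OSError:
                    continue

        if __name__ == "__main__":
            print(sorted(set(pids_for_inodes(listening_inodes(80))), key=int))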

    Read the article

  • How can I control WiFi clients that are authenticated with a RADIUS server (FreeRADIUS) and Active Directory [on hold]

    - by Debian
    To authenticate WiFi clients, I have a FreeRADIUS server that works with Active Directory. My problem is that right now every user in Active Directory can connect to the WLAN, and I cannot restrict which of them are allowed. How can I control this with the FreeRADIUS server? Honestly, I don't know how to control them. FreeRADIUS is installed on CentOS 6.5 and I don't have MySQL. Thanks,

    Read the article

  • Apache + Passenger, requests not forwarded to Rails

    - by olalonde
    I'm trying to set up Apache + Passenger (mod_rails) and everything works fine except that Apache won't forward requests to Rails. From my Apache log: File does not exist: /var/www/rails_apps/domain.com/public/controllerName My Rails app works fine with Mongrel, and I also have some PHP/MySQL sites running fine. I have no .htaccess in my Rails app directory. What is going wrong? How can I troubleshoot this problem?

    Read the article

  • Performance of external USB disk with ESXi5

    - by PeterMmm
    I have a new HP DL120 G7 server running ESXi 5. One VM is a Win2003 installation, and I have an external USB 2.0 drive attached via USB Controller and USB Device. When I copy a 4 GB file from the external USB drive to the server disk, it takes up to 10 minutes in the VM. On a native Win2003 box it takes approx. 3 minutes. I have no explanation for that difference: in either case the bottleneck is the USB connection, which is much slower than the disks (SAS, RAID1). If the USB connection in the VM were USB 1.1 rather than USB 2.0, it would take much more time than this. (The disk performance between server partitions on the VM is fine; see update.) Could it be that my native box is extremely fast and the VM is the normal case? Update: I tried with passthrough and a first run copies the same data in approx. 7 minutes, still about twice as slow as the native connection. I also did another measurement: a copy between partitions on the same VM takes 3 minutes.

    Read the article

  • imagecreatefromjpeg() stops working after server upgrade

    - by John Conde
    We have a server located at a local company's place of business running Solaris/Apache/PHP. They recently did an update to Solaris, Apache, and PHP (security patches, etc.). Unfortunately it has caused the image manipulation portion of our software to break. imagecreatefromjpeg() is now generating the following error: Warning: imagecreatefromjpeg() [function.imagecreatefromjpeg]: '/path/to/file/filename.jpg' is not a valid JPEG file in /path/to/file/Image.class.php on line XX No PHP code was changed during the server upgrade, and it was fully functional before the software upgrades. I checked the files being passed to imagecreatefromjpeg() and they are indeed valid (they open successfully both in image editing software and in my browser). I checked the permissions of the directory from which the files are being opened, and they do have read permission. The GD library is enabled. I'm not sure what else I can check. Based on the scenario above, I am guessing something changed in the software, but I don't know what it could be. PHP was version 5.2.5 and is now 5.2.13. I'd appreciate any guidance as to what the cause of this issue could be.

    Read the article

  • How to benchmark a server behind a load balancer

    - by Fajkowsky
    Hey, I have four computers (all running Linux): two with MediaWiki (mirrors, both connected to one DB), one with MySQL, and one server (DHCP, DNS, etc.). I configured a load balancer on my server, and now when I type name.local in the browser I get one of my MediaWiki servers. If I press F5 really fast, I can see in top that both computers are being loaded, but not much. I used the ab tool (Apache benchmark), but when I run it, it always connects to one server, never alternating. I use these settings: ab -n 100 -c 10 http://name.local/
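
    One thing worth checking is that a benchmarking client typically resolves the host name once and may reuse connections, so every request can land on the same backend even though interactive browsing gets spread out. A rough Python load generator that opens a fresh connection (and re-resolves the name) for each request is sketched below; whether that matches this particular balancer setup is an assumption.

        #!/usr/bin/env python3
        # Sketch: issue many requests, each on its own connection, so a DNS
        # round-robin or per-connection balancer has a chance to spread them.
        import concurrent.futures
        import urllib.request

        URL = "http://name.local/"   # hostname from the question
        REQUESTS = 100
        CONCURRENCY = 10

        def fetch(_):
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.status

        if __name__ == "__main__":
            with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
                statuses = list(pool.map(fetch, range(REQUESTS)))
            print("completed:", len(statuses), "ok:", statuses.count(200))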

    Read the article

  • Request data from database using Django

    - by user21901
    I have two variables, for example x and y. I have also made a model with three fields (longitude, latitude, name) and created the corresponding table in the MySQL database. I need to send these two variables (x, y) to the Django server so it can check whether there is an object with longitude=x and latitude=y. If there is one, I want to get back its name. How can I do this?
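
    A minimal sketch of that lookup as a Django view, assuming the model is called Location and the client sends x and y as GET parameters (the names here are illustrative, not taken from the question):

        # views.py -- illustrative only
        from django.http import HttpResponse, Http404

        from .models import Location   # model with longitude, latitude, name fields

        def point_name(request):
            x = request.GET.get("x")   # longitude sent by the client
            y = request.GET.get("y")   # latitude sent by the client
            try:
                loc = Location.objects.get(longitude=x, latitude=y)
            except Location.DoesNotExist:
                raise Http404("no point at (%s, %s)" % (x, y))
            return HttpResponse(loc.name)

    Note that exact equality on floating-point coordinates is fragile; in practice you would either store the coordinates as DecimalFields or search within a small tolerance using __gte/__lte range filters.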

    Read the article

  • How do I change the default ftp folder in Mac OS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 from a Mac running 10.6.3. WordPress is installed to the /Library/WebServer/Documents folder. WordPress has a feature called Auto Update. Clicking an auto update button will download and install updated versions of the WordPress software, or third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP and set up a user account and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is, how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?

    Read the article

  • VMWare server: virtual machine start-up reaches 95% and hangs

    - by Magsol
    This problem cropped up today, after updating my eVGA motherboard's chipset in order to try and fix an unrelated issue. After installing the chipset update (contained SATA and Ethernet drivers), every time I've tried to start my Ubuntu VM, it reaches 95% in the web interface and then just hangs. I'm using VMWare Server 2.0.2, running it within Windows 7 64-bit. I haven't had any issues up until now, and I suspect it has something to do with the chipset update. I've already tried reinstalling VMWare itself, removing the VM from the inventory and re-adding it, and neither has proved successful. I'm also not sure how to kill the VMware server process itself once the start-up hangs; I've only been able to try again by rebooting (as none of the VMWare Services listed kill the server process itself). Any insights? Edit: Uhhh...as an addendum: I have a cron job set up on my Ubuntu VM that runs every 20 minutes. The VM is still listed in the VMWare web interface as at 95% of startup, and the start/stop buttons are still disabled, but the cronjob just ran. I also tested SSH, and I was able to tunnel into the Ubuntu VM as well. Now I'm really confused. Edit #2: I just started a thread on the VMWare Server support forums on this same topic. Hopefully between the two communities, we can come up with an answer: http://communities.vmware.com/thread/251033 Edit #3: In lieu of a specific fix, I've switched over to VirtualBox, and all is working just fine.

    Read the article

  • VM automatic provisioning advice

    - by jdgregson
    In my lab we have 24 workstations, each with five technician-maintained virtual machines set up in VMware Workstation. These provide a lot of management overhead, as we have to update them as well as the host operating systems every three months (the start of the next quarter), which adds up to 144 systems to update instead of just 24. Whenever we need to reimage the hosts, the VMs add another 130GB to each image, which is over 3TB of extra data to send over the network, and a lot more time to apply each image, and then we still have to boot all 120 VMs and assign them a unique IP Address and host names. We would like to get the VMs off the hosts and onto a server, but after looking around for a few days, I still don't know where to begin looking for a solution. There may be a better way to do this, but in my mind, the ideal solution would be to replace the VMs on the host machines with five Thin Client operating systems, each configured to connect to a server and be sent or connected to a unique virtual machine. We can't have 120 VMs running on the server all the time, so the server would have to create a copy of the VM from a template whenever a student tries to boot one, and destroy the VM after the student is finished with it. If there is another client application that has to be installed on the hosts that would be fine, the only reason I'd like to keep them in VMware Workstation is because students already know to look there for the VMs when they need to use them. What, if any, virtualization software will allow this? Is there some other solution I'm not seeing?

    Read the article

  • The amount of memory used by each process

    - by tuxsmouf
    I have a MySQL server running Debian with 2 GB of RAM. I would like to know the amount of memory used by each process. I thought ps -aux was the command and options for it, but I only see about 90 MB used across several processes, while free -m tells me that 1400 MB are used. Is there a way to get a better view of the processes and the memory they use?
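
    Part of the discrepancy is that free -m counts the kernel's page cache and buffers as "used", while per-process RSS double-counts shared pages. To see resident memory per process you can sort ps output (ps aux --sort=-rss) or read /proc directly; a rough Python sketch of the latter:

        #!/usr/bin/env python3
        # Sketch: list resident memory (VmRSS) per process by reading /proc.
        # RSS double-counts shared pages, so the total will not match free -m.
        import os

        def rss_by_process():
            rows = []
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open("/proc/%s/status" % pid) as f:
                        fields = dict(line.split(":", 1) for line in f if ":" in line)
                except OSError:
                    continue  # process exited while we were reading
                rss_kb = int(fields.get("VmRSS", "0 kB").split()[0])
                name = fields.get("Name", "?").strip()
                rows.append((rss_kb, pid, name))
            return sorted(rows, reverse=True)

        if __name__ == "__main__":
            for rss_kb, pid, name in rss_by_process()[:15]:
                print("%8d kB  %6s  %s" % (rss_kb, pid, name))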

    Read the article

  • Spring Security: Failed to load ApplicationContext with pre-post-annotations="enabled"

    - by thogau
    I am using spring 3.0.1 + spring-security 3.0.2 and I am trying to use features like @PreAuthorize and @PostFilter annotations. When running in units tests using @RunWith(SpringJUnit4ClassRunner.class) or in a main(String[] args) method my application context fails to start if enable pre-post-annotations and use org.springframework.security.acls.AclPermissionEvaluator : <!-- Enable method level security--> <security:global-method-security pre-post-annotations="enabled"> <security:expression-handler ref="expressionHandler"/> </security:global-method-security> <bean id="expressionHandler" class="org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler"> <property name="permissionEvaluator" ref="aclPermissionEvaluator"/> </bean> <bean id="aclPermissionEvaluator" class="org.springframework.security.acls.AclPermissionEvaluator"> <constructor-arg ref="aclService"/> </bean> <!-- Enable stereotype support --> <context:annotation-config /> <context:component-scan base-package="com.rreps.core" /> <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <list> <value>classpath:applicationContext.properties</value> </list> </property> </bean> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"> <property name="driverClass" value="${jdbc.driver}" /> <property name="jdbcUrl" value="${jdbc.url}" /> <property name="user" value="${jdbc.username}" /> <property name="password" value="${jdbc.password}" /> <property name="initialPoolSize" value="10" /> <property name="minPoolSize" value="5" /> <property name="maxPoolSize" value="25" /> <property name="acquireRetryAttempts" value="10" /> <property name="acquireIncrement" value="5" /> <property name="idleConnectionTestPeriod" value="3600" /> <property name="maxIdleTime" value="10800" /> <property name="maxConnectionAge" value="14400" /> <property name="preferredTestQuery" value="SELECT 1;" /> <property name="testConnectionOnCheckin" value="false" /> </bean> <bean id="auditedSessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation" value="classpath:hibernate.cfg.xml" /> <property name="hibernateProperties"> <value> hibernate.dialect=${hibernate.dialect} hibernate.query.substitutions=true 'Y', false 'N' hibernate.cache.use_second_level_cache=true hibernate.cache.provider_class=net.sf.ehcache.hibernate.SingletonEhCacheProvider hibernate.hbm2ddl.auto=update hibernate.c3p0.acquire_increment=5 hibernate.c3p0.idle_test_period=3600 hibernate.c3p0.timeout=10800 hibernate.c3p0.max_size=25 hibernate.c3p0.min_size=1 hibernate.show_sql=false hibernate.validator.autoregister_listeners=false </value> </property> <!-- validation is performed by "hand" (see http://opensource.atlassian.com/projects/hibernate/browse/HV-281) <property name="eventListeners"> <map> <entry key="pre-insert" value-ref="beanValidationEventListener" /> <entry key="pre-update" value-ref="beanValidationEventListener" /> </map> </property> --> <property name="entityInterceptor"> <bean class="com.rreps.core.dao.hibernate.interceptor.TrackingInterceptor" /> </property> </bean> <bean id="simpleSessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation" value="classpath:hibernate.cfg.xml" /> <property name="hibernateProperties"> <value> 
hibernate.dialect=${hibernate.dialect} hibernate.query.substitutions=true 'Y', false 'N' hibernate.cache.use_second_level_cache=true hibernate.cache.provider_class=net.sf.ehcache.hibernate.SingletonEhCacheProvider hibernate.hbm2ddl.auto=update hibernate.c3p0.acquire_increment=5 hibernate.c3p0.idle_test_period=3600 hibernate.c3p0.timeout=10800 hibernate.c3p0.max_size=25 hibernate.c3p0.min_size=1 hibernate.show_sql=false hibernate.validator.autoregister_listeners=false </value> </property> <!-- property name="eventListeners"> <map> <entry key="pre-insert" value-ref="beanValidationEventListener" /> <entry key="pre-update" value-ref="beanValidationEventListener" /> </map> </property--> </bean> <bean id="sequenceSessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation" value="classpath:hibernate.cfg.xml" /> <property name="hibernateProperties"> <value> hibernate.dialect=${hibernate.dialect} hibernate.query.substitutions=true 'Y', false 'N' hibernate.cache.use_second_level_cache=true hibernate.cache.provider_class=net.sf.ehcache.hibernate.SingletonEhCacheProvider hibernate.hbm2ddl.auto=update hibernate.c3p0.acquire_increment=5 hibernate.c3p0.idle_test_period=3600 hibernate.c3p0.timeout=10800 hibernate.c3p0.max_size=25 hibernate.c3p0.min_size=1 hibernate.show_sql=false hibernate.validator.autoregister_listeners=false </value> </property> </bean> <bean id="validationFactory" class="javax.validation.Validation" factory-method="buildDefaultValidatorFactory" /> <!-- bean id="beanValidationEventListener" class="org.hibernate.cfg.beanvalidation.BeanValidationEventListener"> <constructor-arg index="0" ref="validationFactory" /> <constructor-arg index="1"> <props/> </constructor-arg> </bean--> <!-- Enable @Transactional support --> <tx:annotation-driven transaction-manager="transactionManager"/> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="auditedSessionFactory" /> </bean> <security:authentication-manager alias="authenticationManager"> <security:authentication-provider user-service-ref="userDetailsService" /> </security:authentication-manager> <bean id="userDetailsService" class="com.rreps.core.service.impl.UserDetailsServiceImpl" /> <!-- ACL stuff --> <bean id="aclCache" class="org.springframework.security.acls.domain.EhCacheBasedAclCache"> <constructor-arg> <bean class="org.springframework.cache.ehcache.EhCacheFactoryBean"> <property name="cacheManager"> <bean class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"/> </property> <property name="cacheName" value="aclCache"/> </bean> </constructor-arg> </bean> <bean id="lookupStrategy" class="org.springframework.security.acls.jdbc.BasicLookupStrategy"> <constructor-arg ref="dataSource"/> <constructor-arg ref="aclCache"/> <constructor-arg> <bean class="org.springframework.security.acls.domain.AclAuthorizationStrategyImpl"> <constructor-arg> <list> <bean class="org.springframework.security.core.authority.GrantedAuthorityImpl"> <constructor-arg value="ROLE_ADMINISTRATEUR"/> </bean> <bean class="org.springframework.security.core.authority.GrantedAuthorityImpl"> <constructor-arg value="ROLE_ADMINISTRATEUR"/> </bean> <bean class="org.springframework.security.core.authority.GrantedAuthorityImpl"> <constructor-arg value="ROLE_ADMINISTRATEUR"/> </bean> </list> </constructor-arg> </bean> </constructor-arg> <constructor-arg> <bean 
class="org.springframework.security.acls.domain.ConsoleAuditLogger"/> </constructor-arg> </bean> <bean id="aclService" class="com.rreps.core.service.impl.MysqlJdbcMutableAclService"> <constructor-arg ref="dataSource"/> <constructor-arg ref="lookupStrategy"/> <constructor-arg ref="aclCache"/> </bean> The strange thing is that the context starts normally when deployed in a webapp and @PreAuthorize and @PostFilter annotations are working fine as well... Any idea what is wrong? Here is the end of the stacktrace : ... 55 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in class path resource [applicationContext-core.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.config.internalTransactionAdvisor': Cannot resolve reference to bean 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0' while setting bean property 'transactionAttributeSource'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0': Initialization of bean failed; nested exception is java.lang.NullPointerException at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:322) ... 
67 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.config.internalTransactionAdvisor': Cannot resolve reference to bean 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0' while setting bean property 'transactionAttributeSource'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0': Initialization of bean failed; nested exception is java.lang.NullPointerException at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1308) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1067) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:511) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.aop.framework.autoproxy.BeanFactoryAdvisorRetrievalHelper.findAdvisorBeans(BeanFactoryAdvisorRetrievalHelper.java:86) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findCandidateAdvisors(AbstractAdvisorAutoProxyCreator.java:100) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findEligibleAdvisors(AbstractAdvisorAutoProxyCreator.java:86) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(AbstractAdvisorAutoProxyCreator.java:68) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:359) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:322) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:404) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1409) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) ... 
73 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0': Initialization of bean failed; nested exception is java.lang.NullPointerException at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:322) ... 91 more Caused by: java.lang.NullPointerException at org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource.getAttributes(DelegatingMethodSecurityMetadataSource.java:52) at org.springframework.security.access.intercept.aopalliance.MethodSecurityMetadataSourceAdvisor$MethodSecurityMetadataSourcePointcut.matches(MethodSecurityMetadataSourceAdvisor.java:129) at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:215) at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:252) at org.springframework.aop.support.AopUtils.findAdvisorsThatCanApply(AopUtils.java:284) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findAdvisorsThatCanApply(AbstractAdvisorAutoProxyCreator.java:117) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findEligibleAdvisors(AbstractAdvisorAutoProxyCreator.java:87) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(AbstractAdvisorAutoProxyCreator.java:68) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:359) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:322) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:404) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1409) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) ... 97 more

    Read the article

  • I never really understood: what is CGI?

    - by claws
    CGI is the Common Gateway Interface. As the name says, it is a "common" gateway interface for everything. It sounds so trivial and naive from the name. I feel that I understood this, and I felt that every time I encountered the word. But frankly, I never did. I'm still confused. I am a PHP programmer and have done a lot of web development: user (client) requests a page --- webserver (with embedded PHP interpreter) --- server-side (PHP) script --- MySQL server. Now say my PHP script can fetch results from the MySQL server && a MATLAB server && some other server. So, is my PHP script now the CGI, because it's the interface between the webserver & all the other servers? I don't know. Sometimes they call CGI a technology and other times they call CGI a program or some other server. What exactly is CGI? What's the big deal with /cgi-bin/*.cgi? What's up with that? I don't know what the cgi-bin directory on the server is for, or why the files have *.cgi extensions. And why does Perl always come into the picture? CGI & Perl (the language). I also don't know what's up with these two; almost all the time I hear them in combination, "CGI & Perl". The book CGI Programming with Perl is another great example. Why not "CGI Programming with PHP/JSP/ASP"? I never saw such things. "CGI Programming in C" confuses me a lot. In C?? Seriously?? I don't know what to say; I'm just confused. "In C" changes everything: the program needs to be compiled and executed. This entirely changes my view of web programming. When do I compile? How does the program get executed (because it will be machine code, so it must run as an independent process)? How does it communicate with the web server? IPC? And interfacing with all the servers (in my example MATLAB & MySQL) using socket programming? I'm lost!! They say that CGI is deprecated and no longer in use. Is that so? What is its latest update? Once, I ran into a situation where I had to give HTTP PUT request access to a web server (Apache HTTPD). It was a long time back, but as far as I remember this is what I did: edited the configuration file of Apache HTTPD to tell the webserver to pass all HTTP PUT requests to some put.php (I had to write this PHP script), then implemented put.php to handle the request (save the file to the location mentioned). People said that I wrote a CGI script. Seriously, I didn't have a clue what they were talking about. Did I really write a CGI script? I hope you understand what my confusion is (because I myself don't know where I'm confused). I request you guys to keep your answers as simple as possible; I really can't understand any fancy technical terminology, at least not in this case. EDIT: I found this amazing tutorial, "CGI Programming Is Simple!" - CGI Tutorial, which explains the concepts in the simplest possible way. I have only one complaint about this tutorial: to make his explanation complete, he should have shown the C code he used for generating the responses to those GET/POST requests. I've also added a link to this tutorial to Wikipedia's article: http://en.wikipedia.org/wiki/Common_Gateway_Interface
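
    To make the idea concrete, here is a minimal sketch of a CGI program, written in Python purely for illustration (the same contract applies to Perl, C, or anything else). The web server puts the request details into environment variables, runs the executable once per request, and relays whatever it prints to stdout back to the browser; that contract is essentially all CGI is.

        #!/usr/bin/env python3
        # A CGI "script" is just an executable: the server sets environment
        # variables, runs it, and its stdout becomes the HTTP response.
        import os
        import sys

        def main():
            method = os.environ.get("REQUEST_METHOD", "GET")
            query = os.environ.get("QUERY_STRING", "")
            body = ""
            if method == "POST":
                length = int(os.environ.get("CONTENT_LENGTH") or 0)
                body = sys.stdin.read(length)

            # A CGI response is HTTP headers, a blank line, then the body.
            print("Content-Type: text/plain")
            print()
            print("method =", method)
            print("query  =", query)
            print("body   =", body)

        if __name__ == "__main__":
            main()

    Because a new process is started for every request, classic CGI is slow under load, which is why in-server approaches such as mod_php, mod_perl, and FastCGI largely replaced it.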

    Read the article

  • java.sql.SQLException: Operation not allowed after ResultSet closed

    - by javatraniee
    Why am I getting an Resultset already closed error? public class Server implements Runnable { private static int port = 1600, maxConnections = 0; public static Connection connnew = null; public static Connection connnew1 = null; public static Statement stnew, stnew1, stnew2, stnew3, stnew4; public void getConnection() { try { Class.forName("org.gjt.mm.mysql.Driver"); connnew = DriverManager.getConnection("jdbc:mysql://localhost/db_alldata", "root", "flashkit"); connnew1 = DriverManager.getConnection("jdbc:mysql://localhost/db_main", "root", "flashkit"); stnew = connnew.createStatement(); stnew1 = connnew.createStatement(); stnew2 = connnew1.createStatement(); stnew3 = connnew1.createStatement(); stnew4 = connnew1.createStatement(); } catch (Exception e) { System.out.print("Get Connection Exception---" + new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date()) + "----- > " + e); } } public void closeConnection() { try { if (!(connnew.isClosed())) { stnew.close(); stnew1.close(); connnew.close(); } if (!(connnew1.isClosed())) { stnew2.close(); stnew3.close(); stnew4.close(); connnew1.close(); } } catch (Exception e) { System.out.print("Close Connection Closing Exception-----" + new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date()) + "------->" + e); } } Server() { try { } catch (Exception ee) { System.out.print("Server Exceptions in main connection--" + new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date()) + "------>" + ee); } } public static void main(String[] args) throws SQLException { int i = 0; Server STUD = new Server(); STUD.getConnection(); try { ServerSocket listener = new ServerSocket(port); Socket server; while ((i++ < maxConnections) || (maxConnections == 0)) { @SuppressWarnings("unused") doComms connection; server = listener.accept(); try { ResultSet checkconnection = stnew4 .executeQuery("select count(*) from t_studentdetails"); if (checkconnection.next()) { // DO NOTHING IF EXCEPTION THEN CLOSE ALL CONNECTIONS AND OPEN NEW // CONNECTIONS } } catch (Exception e) { System.out.print("Db Connection Lost Closing And Re-Opning It--------" + new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date()) + "--------" + e); STUD.closeConnection(); STUD.getConnection(); } doComms conn_c = new doComms(server, stnew, stnew1, stnew2, stnew3); Thread t = new Thread(conn_c); t.start(); } } catch (IOException ioe) { System.out.println("Main IOException on socket listen: " + ioe); } } public void run() { } } class doComms implements Runnable { private Socket server; private String input; static Connection conn = null; static Connection conn1 = null; static Statement st, st1, st2, st3; doComms(Socket server, Statement st, Statement st1, Statement st2, Statement st3) { this.server = server; doComms.st = st; doComms.st1 = st1; doComms.st2 = st2; doComms.st3 = st3; } @SuppressWarnings("deprecation") public void run() { input = ""; // char ch; try { DataInputStream in = new DataInputStream(server.getInputStream()); OutputStreamWriter outgoing = new OutputStreamWriter(server.getOutputStream()); while (!(null == (input = in.readLine()))) { savetodatabase(input, server.getPort(), outgoing); } } catch (IOException ioe) { System.out.println("RUN IOException on socket listen:-------" + new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date()) + "----- " + ioe); ioe.printStackTrace(); } } public void savetodatabase(String line, int port1, OutputStreamWriter outgoing) { try { String Rollno = "-", name = "-", div = "-", storeddate = "-", storedtime = "-", mailfrom = ""; 
String newline = line; String unitid = "-"; storeddate = new SimpleDateFormat("yyyy-MM-dd").format(new java.util.Date()); storedtime = new SimpleDateFormat("HH:mm:ss").format(new java.util.Date()); String sql2 = "delete from t_currentport where PortNumber='" + port1 + "''"; st2.executeUpdate(sql2); sql2 = "insert into t_currentport (unitid, portnumber,thedate,thetime) values >('" + unitid + "','" + port1 + "','" + storeddate + "','" + storedtime + "')"; st2.executeUpdate(sql2); String tablename = GetTable(); String sql = "select * from t_studentdetails where Unitid='" + unitid + "'"; ResultSet rst = st2.executeQuery(sql); if (rst.next()) { Rollno = rst.getString("Rollno"); name = rst.getString("name"); div = rst.getString("div"); } String sql1 = "insert into studentInfo StoredDate,StoredTime,Subject,UnitId,Body,Status,Rollno,div,VehId,MailDate,MailTime,MailFrom,MailTo,Header,UnProcessedStamps) values('" + storeddate + "','" + storedtime + "','" + unitid + "','" + unitid + "','" + newline + "','Pending','" + Rollno + "','" + div + "','" + name + "','" + storeddate + "','" + storedtime + "','" + mailfrom + "','" + mailfrom + "','-','-')"; st1.executeUpdate(sql1); } catch (Exception e) { System.out.print("Save to db Connection Exception--" + new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date()) + "-->" + e); } } }

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }

    Read the article

  • Ruby number_to_currency displays a totally wrong number!

    - by SueP
    Greetings! I’m an old Delphi programmer making the leap to the Mac, Ruby, Rails, and web programming in general. I’m signed up for the Advanced Rails workshop at the end of the month. In the meantime, I’ve been working on porting a mission-critical (of course) app from Delphi to RAILS. It feels like I’ve spent most of the past year with my head buried in a book or podcast. Right now, I’ve hit a major issue and I’m tearing my hair out. I literally don’t know where to go with this, I desperately don’t want to deploy with this bug, and I’m feeling a bit frantic. (The company database is currently running on an ancient XP box that’s looking rustier by the day.) So, I set up a test database that shows the problem. I’m running: OS/X 10.6.3 Rails 2.3.5 ruby 1.8.7 (2009-06-08 patchlevel 173) [universal-darwin10.0] MySQL 5.1.38-log via socket MySQL Client Version 5.1.8 ActiveRecord::Schema.define(:version => 20100406222528) do create_table “money”, :force => true do |t| t.decimal “amount_due”, :precision => 10, :scale => 2, :default => 0.0 t.decimal “balance”, :precision => 10, :scale => 2, :default => 0.0 t.text “memofield” t.datetime “created_at” t.datetime “updated_at” end The index view is right out of the generator, slightly modified to add the formatting that's breaking on me. Listing money <table> <tr> <th>Amount</th> <th>Amount to_s </th> <th>Balance to $</th> <th>Balance with_precision </th> <th>Memofield</th> </tr> <% @money.each do |money| %> <tr> <td><%=h money.amount_due %></td> <td><%=h money.amount_due.to_s(‘F’) %></td> <td><%=h number_to_currency(money.balance) %></td> <td><%=h number_with_precision(money.balance, :precision => 2) %></td> <td><%=h money.memofield %></td> <td><%= link_to ‘Show’, money %></td> <td><%= link_to ‘Edit’, edit_money_path(money) %></td> <td><%= link_to ‘Destroy’, money, :confirm => ‘Are you sure?’, :method => :delete %></td> </tr> *<% end %> *</table> <%= link_to ‘New money’, new_money_path %> This seemed to work pretty well. Then I started testing with production data and hit a major problem with number_to_currency. The number in the database is: 10542.28, I verified it with the MySQL Query Browser. RAILS will display this as 10542.28 unless I call number_to_currency, then that number is displayed as: $15422.80 The error seems to happen with any number between 10,000.00 and 10,999.99 So far, I haven’t seen it outside of that range, but I obviously haven’t tested everything. I guess my workaround is to remove number_to_currency, but that leaves the views looking really sloppy and unprofessional. The formatting is messed up, things don’t line up properly and I can’t force the display to 2 decimal places. I’m seriously hoping there is an easy fix for this. I can’t imagine this being a widespread problem. It would affect so many people that someone would have fixed it! But I don’t know where to go from here. I’d desperately like some help. (Later – number_with_precision fails the same way number_to_currency does.) Sue Petersen

    Read the article
