Search Results

Search found 3171 results on 127 pages for 'manual manuel'.


  • vsftpd not allowing uploads. 550 response.

    - by Josh
    I've set vsftpd up on a CentOS box. I keep trying to upload files, but I keep getting "550 Failed to change directory" and "550 Could not get file size." Here's my vsftpd.conf:

        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=YES
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        anon_mkdir_write_enable=YES
        anon_other_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        #xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=NO
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        #ftpd_banner=Welcome to blah FTP service.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd/chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        #ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        log_ftp_protocol=YES
        banner_file=/etc/vsftpd/issue
        local_root=/var/www
        guest_enable=YES
        guest_username=ftpusr
        ftp_username=nobody
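    The configuration above does enable anonymous writes, but 550 replies like these are usually a filesystem problem (ownership, permissions, or SELinux labels on the anonymous root) rather than a vsftpd option. One way to reproduce the failure without a GUI client is the sketch below, using Python's standard ftplib; the host and target directory are placeholders, not values from this setup. If the CWD step fails, check the permissions of local_root (/var/www) and, on CentOS, the SELinux booleans governing anonymous FTP writes.

        # Minimal anonymous-upload probe using only the standard library.
        # Assumptions: placeholder host, and "upload" as the directory that
        # is meant to be writable; adjust both to the real setup.
        from ftplib import FTP, error_perm
        from io import BytesIO

        ftp = FTP("192.0.2.10")      # placeholder host
        ftp.login()                  # anonymous login
        try:
            ftp.cwd("upload")        # a 550 here points at perms/SELinux
            ftp.storbinary("STOR probe.txt", BytesIO(b"hello\n"))
            print("upload OK")
        except error_perm as err:
            print("server refused:", err)
        finally:
            ftp.quit()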

    Read the article

  • How to set up port forwarding and firewall settings for torrents using Transmission on Mac OS X 10.5

    - by Liz
    I have picked up bits of advice here and there on the internet and have got some way through this tortuous exercise (after it took 18 hours to download the first torrent I tried yesterday, a magnet link for a film). Where I have got stuck is configuring the firewall on the Netgear router, but I am not sure if I have caused the problem myself by something else I have done while configuring the Mac's System Preferences for Security or Networking.

    I have been following the sections of these instructions that seem to apply, although they are written for a different OS X version (I don't know which one, but the screenshots do not match what I see). I do not want to set up my Mac as a server, and I am attending to the parts that apply to port forwarding for Netgear rather than LinkSys: http://homepage.mac.com/car1son/static_port_fwd_intro.html

    I have been trying to follow these instructions:

        Instructions for DG834, DG834G, DG824M, FR114W, FM114P, FR114P, FR328S, FVL328, FVS328, FVS338, FVX538, FWAG114, FWG114P, or FVS318v3

        These routers do port forwarding by assigning port numbers to a "service" associated with the application you want to run. "Rules" are set for particular services. Rules block or allow access, based on various conditions such as the time of day and the name of the service.

        To Create a New Inbound or Outbound Rule
        1. Submit the router's address in an Internet browser. (The default is 192.168.0.1).
        2. Enter the router's username and password.
        3. From the main menu, click Security > Rules.
        4. Click Add for inbound or outbound traffic, as appropriate to the application you are planning to run.
        5. Select the Service. The services the router knows about are listed in the drop down. If the service you want is not listed, add it as described in the next section.
        6. Select the Action, for example ALLOW always.
        7. For Send to LAN Server, enter the IP address of the local server. Note that this is also the IP address the computers on your LAN will access.
        8. For WAN User choose Any, or limit access to particular IP addresses.
        9. For Log selection it is reasonable to turn logs on, especially at the beginning when you are unsure of the result of the changes you are making. Later, you may want to set logs to "Never" for performance reasons.
        10. Click Apply.

        As noted in the user manual for some models:
        * Consider using the Dynamic DNS feature on the Advanced menu, so that external users can find your network when the DHCP lease is renewed by your ISP.
        * If your own LAN server uses DHCP, and your IPs change on rebooting, consider using the Reserved IP Address feature in the LAN IP menu.

        To Add a Service for These Routers
        1. Click Security > Services > Add Custom Service.
        2. Enter any name you choose for the service.
        3. Select whether the service is to use TCP or UDP. If you are unsure, select both.
        4. Enter the lowest port number used by the service.
        5. Enter the highest port number used. If the service uses only one port number, enter the same number.
        6. Click Apply.

    There is no "Security - Rules" submenu on the Netgear page, so I have been trying to access "Security - Firewall Rules". I can access everything else in the Netgear settings as Admin, but I cannot get the "Firewall Rules" section to open up. (I am not 100% sure I will know exactly what to do if and when I do get it opened up!)
    I haven't managed to find, through searching the internet, any instructions that seem to apply specifically to what I am trying to achieve, so I would be very grateful if someone could either point me in the right direction or give me some advice directly. Best wishes, Liz
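    Once a forwarding rule is in place, it helps to verify it from outside the LAN rather than by waiting on download speeds. Below is a minimal reachability sketch; the address is a placeholder, and 51413 is Transmission's default peer port (check Transmission's Preferences > Network for the actual value, and note Transmission also has a built-in "Test Port" button that checks the same thing).

        # Quick TCP reachability probe for the forwarded peer port.
        # Placeholders: substitute your WAN address and configured port.
        # Run this from a machine outside your own network.
        import socket

        HOST, PORT = "203.0.113.5", 51413

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        try:
            s.connect((HOST, PORT))
            print("port is reachable: forwarding works")
        except (socket.timeout, OSError) as err:
            print("port closed or filtered:", err)
        finally:
            s.close()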

    Read the article

  • fd partitions gone from 2 discs, md happy with it and resyncs. How to recover?

    - by d0nd
    Hey gurus, need some help badly with this one. I run a server with a 6TB md RAID 5 volume built over 7 x 1TB disks. I had to shut the server down recently, and when it came back up, 2 of the 7 disks used for the RAID volume had lost their partition tables:

        dmesg:
        [   10.184167]  sda: sda1 sda2 sda3   // System disk
        [   10.202072]  sdb: sdb1
        [   10.210073]  sdc: sdc1
        [   10.222073]  sdd: sdd1
        [   10.229330]  sde: sde1
        [   10.239449]  sdf: sdf1
        [   11.099896]  sdg: unknown partition table
        [   11.255641]  sdh: unknown partition table

    All 7 disks have the same geometry and were configured alike:

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x1e7481a5

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1      121601   976760001   fd  Linux raid autodetect

    All 7 partitions (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in an md RAID 5 XFS volume. When booting, md, which was (obviously) out of sync, kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; XFS tried to do some shenanigans as well:

        [   19.566941] md: md0 stopped.
        [   19.817038] md: bind<sdc1>
        [   19.817339] md: bind<sdd1>
        [   19.817465] md: bind<sde1>
        [   19.817739] md: bind<sdf1>
        [   19.817917] md: bind<sdh>
        [   19.818079] md: bind<sdg>
        [   19.818198] md: bind<sdb1>
        [   19.818248] md: md0: raid array is not clean -- starting background reconstruction
        [   19.825259] raid5: device sdb1 operational as raid disk 0
        [   19.825261] raid5: device sdg operational as raid disk 6
        [   19.825262] raid5: device sdh operational as raid disk 5
        [   19.825264] raid5: device sdf1 operational as raid disk 4
        [   19.825265] raid5: device sde1 operational as raid disk 3
        [   19.825267] raid5: device sdd1 operational as raid disk 2
        [   19.825268] raid5: device sdc1 operational as raid disk 1
        [   19.825665] raid5: allocated 7334kB for md0
        [   19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2
        [   19.825669] RAID5 conf printout:
        [   19.825670]  --- rd:7 wd:7
        [   19.825671]  disk 0, o:1, dev:sdb1
        [   19.825672]  disk 1, o:1, dev:sdc1
        [   19.825673]  disk 2, o:1, dev:sdd1
        [   19.825675]  disk 3, o:1, dev:sde1
        [   19.825676]  disk 4, o:1, dev:sdf1
        [   19.825677]  disk 5, o:1, dev:sdh
        [   19.825679]  disk 6, o:1, dev:sdg
        [   19.899787] PM: Starting manual resume from disk
        [   28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device
        [   28.663228] XFS mounting filesystem md0
        [   28.884433] md: resync of RAID array md0
        [   28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
        [   28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
        [   28.884433] md: using 128k window, over a total of 976759936 blocks.
        [   29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal)
        [   32.680486] XFS: xlog_recover_process_data: bad clientid
        [   32.680495] XFS: log mount/recovery failed: error 5
        [   32.682773] XFS: log mount failed

    I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array, but it didn't work: no matter what was in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 or sdh1, which explains why it won't use them. I just don't know why those partitions are gone from /dev or how to re-add them...

        blkid:
        /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2"
        /dev/sda2: TYPE="swap"
        /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3"
        /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
        /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"

        fdisk -l:
        Disk /dev/sda: 40.0 GB, 40020664320 bytes
        255 heads, 63 sectors/track, 4865 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x8c878c87

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          12       96358+  83  Linux
        /dev/sda2              13         134      979965   82  Linux swap / Solaris
        /dev/sda3             135        4865    38001757+  83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x1e7481a5

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xc9bdc1e9

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xcc356c30

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xe87f7a3d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sde1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xb17a2d22

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdf1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x8f3bce61

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdg1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xa98062ce

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdh1               1      121601   976760001   fd  Linux raid autodetect

    I really don't know what happened or how to recover from this mess. Needless to say, the 5TB or so of data sitting on those disks is very valuable to me... Any ideas, anyone? Has anybody ever experienced a similar situation, or does anyone know how to recover from it? Can someone help me? I'm really desperate... :x
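    Note that fdisk -l already shows /dev/sdg1 and /dev/sdh1 on disk, while /dev has no nodes for them. That pattern usually means the kernel never re-read the repaired partition tables, and it cannot do so while md holds the whole disks sdg and sdh open. A hedged sketch of the re-scan step follows; stop the array first (mdadm --stop /dev/md0) and write nothing to these disks until the data is safe. This is a sketch of one step, not full recovery advice.

        # Ask the kernel to re-read the partition tables so /dev/sdg1 and
        # /dev/sdh1 reappear. Assumes the array is stopped and the fdisk
        # changes were already saved to disk.
        import subprocess

        for disk in ("/dev/sdg", "/dev/sdh"):
            subprocess.run(["blockdev", "--rereadpt", disk], check=True)

        # Confirm the kernel now sees the partitions:
        print(open("/proc/partitions").read())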

    Read the article

  • How to install MariaDB RPMs on CentOS 6.4 using rpm (not yum) + handling mysql-libs conflicts

    - by Pat C
    I need to script the install of MariaDB using the rpm command on CentOS 6.4. I can't use yum, since it's going to be an offline install, so there's no access to the repository. The only MySQL package installed is mysql-libs, as various other packages in CentOS depend on it. When I did a test install of MariaDB with yum, it correctly accounted for mysql-libs and uninstalled it at the end, as MariaDB could handle the dependencies after it was installed:

        [root@new-host-6 ~]# yum install MariaDB-client MariaDB-common MariaDB-compat MariaDB-devel MariaDB-server MariaDB-shared
        Loaded plugins: downloadonly, fastestmirror, refresh-packagekit, security, verify
        Loading mirror speeds from cached hostfile
         * base: mirrors.kernel.org
         * extras: mirror.keystealth.org
         * updates: mirror.umd.edu
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package MariaDB-client.x86_64 0:5.5.32-1 will be installed
        ---> Package MariaDB-common.x86_64 0:5.5.32-1 will be installed
        ---> Package MariaDB-compat.x86_64 0:5.5.32-1 will be obsoleting
        ---> Package MariaDB-devel.x86_64 0:5.5.32-1 will be installed
        ---> Package MariaDB-server.x86_64 0:5.5.32-1 will be installed
        ---> Package MariaDB-shared.x86_64 0:5.5.32-1 will be obsoleting
        ---> Package mysql-libs.x86_64 0:5.1.66-2.el6_3 will be obsoleted
        --> Finished Dependency Resolution

        Dependencies Resolved

        ================================================================================
         Package           Arch      Version            Repository   Size
        ================================================================================
        Installing:
         MariaDB-client    x86_64    5.5.32-1           mariadb      10 M
         MariaDB-common    x86_64    5.5.32-1           mariadb      23 k
         MariaDB-compat    x86_64    5.5.32-1           mariadb      2.7 M
             replacing  mysql-libs.x86_64 5.1.66-2.el6_3
         MariaDB-devel     x86_64    5.5.32-1           mariadb      5.6 M
         MariaDB-server    x86_64    5.5.32-1           mariadb      34 M
         MariaDB-shared    x86_64    5.5.32-1           mariadb      1.1 M
             replacing  mysql-libs.x86_64 5.1.66-2.el6_3

        Transaction Summary
        ================================================================================
        Install       6 Package(s)

        Total download size: 53 M
        Is this ok [y/N]: y
        Downloading Packages:
        (1/6): MariaDB-5.5.32-centos6-x86_64-client.rpm  |  10 MB  00:06
        (2/6): MariaDB-5.5.32-centos6-x86_64-common.rpm  |  23 kB  00:00
        (3/6): MariaDB-5.5.32-centos6-x86_64-compat.rpm  | 2.7 MB  00:02
        (4/6): MariaDB-5.5.32-centos6-x86_64-devel.rpm   | 5.6 MB  00:06
        (5/6): MariaDB-5.5.32-centos6-x86_64-server.rpm  |  34 MB  00:23
        (6/6): MariaDB-5.5.32-centos6-x86_64-shared.rpm  | 1.1 MB  00:00
        --------------------------------------------------------------------------------
        Total                                 1.3 MB/s |  53 MB  00:40
        warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY
        Retrieving key from https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
        Importing GPG key 0x1BB943DB:
         Userid: "Daniel Bartholomew (Monty Program signing key) <[email protected]>"
         From  : https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
        Is this ok [y/N]: y
        Running rpm_check_debug
        Running Transaction Test
        Transaction Test Succeeded
        Running Transaction
        Warning: RPMDB altered outside of yum.
          Installing : MariaDB-compat-5.5.32-1.x86_64                    1/7
          Installing : MariaDB-common-5.5.32-1.x86_64                    2/7
          Installing : MariaDB-server-5.5.32-1.x86_64                    3/7
        chown: cannot access `/var/lib/mysql': No such file or directory

        PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
        To do so, start the server, then issue the following commands:

        '/usr/bin/mysqladmin' -u root password 'new-password'
        '/usr/bin/mysqladmin' -u root -h new-host-6 password 'new-password'

        Alternatively you can run:
        '/usr/bin/mysql_secure_installation'

        which will also give you the option of removing the test databases
        and anonymous user created by default. This is strongly recommended
        for production servers.

        See the MariaDB Knowledgebase at http://kb.askmonty.org or the
        MySQL manual for more instructions.

        Please report any problems with the '/usr/bin/mysqlbug' script!
        The latest information about MariaDB is available at http://mariadb.org/.
        You can find additional information about the MySQL part at:
        http://dev.mysql.com
        Support MariaDB development by buying support/new features from
        Monty Program Ab. You can contact us about this at [email protected].
        Alternatively consider joining our community based development effort:
        http://kb.askmonty.org/en/contributing-to-the-mariadb-project/

          Installing : MariaDB-devel-5.5.32-1.x86_64                     4/7
          Installing : MariaDB-client-5.5.32-1.x86_64                    5/7
          Installing : MariaDB-shared-5.5.32-1.x86_64                    6/7
          Erasing    : mysql-libs-5.1.66-2.el6_3.x86_64                  7/7
          Verifying  : MariaDB-common-5.5.32-1.x86_64                    1/7
          Verifying  : MariaDB-server-5.5.32-1.x86_64                    2/7
          Verifying  : MariaDB-devel-5.5.32-1.x86_64                     3/7
          Verifying  : MariaDB-client-5.5.32-1.x86_64                    4/7
          Verifying  : MariaDB-compat-5.5.32-1.x86_64                    5/7
          Verifying  : MariaDB-shared-5.5.32-1.x86_64                    6/7
          Verifying  : mysql-libs-5.1.66-2.el6_3.x86_64                  7/7

        Installed:
          MariaDB-client.x86_64 0:5.5.32-1   MariaDB-common.x86_64 0:5.5.32-1
          MariaDB-compat.x86_64 0:5.5.32-1   MariaDB-devel.x86_64 0:5.5.32-1
          MariaDB-server.x86_64 0:5.5.32-1   MariaDB-shared.x86_64 0:5.5.32-1

        Replaced:
          mysql-libs.x86_64 0:5.1.66-2.el6_3

        Complete!

    My question is: what is the equivalent way to install the MariaDB packages using the rpm command only, as opposed to yum? If I do rpm -ivh MariaDB*.rpm, I get a ton of messages like the following about conflicts with mysql-libs:

        file /etc/my.cnf from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64
        file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64

    I then used the --force option to install the MariaDB RPMs and uninstalled mysql-libs. I didn't get any weird messages, but I'm not sure that is the cleanest method to handle the conflicts and do the install. So can someone confirm that installing MariaDB with the following rpm commands would be the same as using yum to install the packages and handle the mysql-libs conflicts/removal:

        rpm -ivh --force MariaDB*.rpm
        rpm -e mysql-libs

    Thanks for any input!
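    For what it's worth, rpm's upgrade mode may make --force unnecessary here: unlike -i, -U processes all the packages in a single transaction, and my reading of rpm's semantics is that it honors MariaDB-compat's Obsoletes: mysql-libs tag the same way yum did, erasing mysql-libs as part of the transaction. That is an assumption about this exact package set rather than something MariaDB documents, so test it on a scratch machine first. A sketch of the scripted form:

        # One-transaction offline install: let rpm's upgrade mode resolve
        # the obsoleted mysql-libs instead of forcing past file conflicts.
        # Assumes all six MariaDB rpms sit in the current directory.
        import glob
        import subprocess

        rpms = sorted(glob.glob("MariaDB-*.rpm"))
        if not rpms:
            raise SystemExit("no MariaDB rpms found")
        subprocess.run(["rpm", "-Uvh"] + rpms, check=True)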

    Read the article

  • local user cannot access vsftpd server

    - by Zloy Smiertniy
    I'm currently running a vsftpd server, and I added the necessary configuration to vsftpd.conf so that local users can use clients like FileZilla to manage their home directories on the server. I found out that only users in the sudoers list can log in without a problem, except that they can't download files; users who are not sudoers cannot even access their homes from a client, though they can connect through a web browser using the FTP protocol, and then they can only access their home directories (as intended). I'm running Fedora 14 on my server, and my vsftpd.conf looks like this:

        # Example config file /etc/vsftpd/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=NO
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        #anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        #xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        ascii_upload_enable=YES
        ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        ftpd_banner=Welcome to GAMBITA FTP service
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        chroot_local_user=YES
        chroot_list_enable=YES
        # (default follows)
        chroot_list_file=/etc/vsftpd/chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        use_localtime=YES

    Does anyone have an idea of what might be happening? Nothing concerning vsftpd is written in any log.
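    Since nothing reaches any log, one quick next step is to capture the raw FTP conversation from a scripted client: every reply code the server sends becomes visible, which tells you whether it is login, the chroot, or the data connection (active vs. passive) that actually fails for non-sudo users. A small sketch, with placeholder host and credentials; GUI clients such as FileZilla default to passive mode, so compare both modes.

        # Print the full client/server FTP dialogue for one local user.
        # Placeholders: host and credentials; run once per test account.
        from ftplib import FTP

        ftp = FTP("192.0.2.20")
        ftp.set_debuglevel(2)        # echo every command and reply code
        ftp.login("testuser", "secret")
        ftp.set_pasv(True)           # flip to False to compare active mode
        print(ftp.nlst())            # a listing forces a data connection
        ftp.quit()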

    Read the article

  • OpenVPN IPv6 over IPv4 tunnel

    - by user66779
    Today I installed OpenVPN 2.3rc2 on both my Windows 7 client machine and my CentOS 6 server. This new version of OpenVPN provides full compatibility for IPv6. The problem: I am currently able to connect to the server (through the IPv4 tunnel) and ping the IPv6 address which is assigned to my client, and I can also ping the tun0 interface on the server. However, I cannot browse to any IPv6 websites. My VPS provider has given me this: 2607:f840:0044:0022:0000:0000:0000:0000/64 is routed to this server (2607:f840:0:3f:0:0:0:eda). This is ifconfig after setup, with OpenVPN running:

        eth0      Link encap:Ethernet  HWaddr 00:16:3E:12:77:54
                  inet addr:208.111.39.160  Bcast:208.111.39.255  Mask:255.255.255.0
                  inet6 addr: 2607:f740:0:3f::eda/64 Scope:Global
                  inet6 addr: fe80::216:3eff:fe12:7754/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:2317253 errors:0 dropped:7263 overruns:0 frame:0
                  TX packets:1977414 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1696120096 (1.5 GiB)  TX bytes:1735352992 (1.6 GiB)
                  Interrupt:29

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
                  inet6 addr: 2607:f740:44:22::1/64 Scope:Global
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:739567 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1218240 errors:0 dropped:1542 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:46512557 (44.3 MiB)  TX bytes:1559930874 (1.4 GiB)

    So OpenVPN is successfully creating a tun0 interface and assigning clients IPv6 addresses from 2607:f840:44:22::/64. The first client to connect gets 2607:f840:44:22::1000, the second 2607:f840:44:22::1001, and so on, plus 1 each time. After connecting as the first client, I can ping 2607:f740:44:22::1 and 2607:f740:44:22::1000 from my Windows client machine. However, I have no access to IPv6 websites. I believe the problem is that the tun0 IPv6 addresses are not being forwarded to the eth0 interface.
    This is the firewall running on the server:

        #!/bin/sh
        #
        # iptables configuration script
        #
        # Flush all current rules from iptables
        #
        iptables -F
        iptables -t nat -F
        #
        # Allow SSH connections on tcp port 22
        #
        iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -j ACCEPT
        #
        # Set access for localhost
        #
        iptables -A INPUT -i lo -j ACCEPT
        #
        # Accept connections on 1195 for vpn access from client
        #
        iptables -A INPUT -i eth0 -p udp --dport 1195 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 1195 -m state --state ESTABLISHED -j ACCEPT
        #
        # Apply forwarding for OpenVPN Tunneling
        #
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 209.111.39.160
        iptables -A FORWARD -j REJECT
        #
        # Enable forwarding
        #
        echo 1 > /proc/sys/net/ipv4/ip_forward
        #
        # Set default policies for INPUT, FORWARD and OUTPUT chains
        #
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        #
        # IPv6
        #
        IP6TABLES=/sbin/ip6tables
        $IP6TABLES -F INPUT
        $IP6TABLES -F FORWARD
        $IP6TABLES -F OUTPUT
        echo -n "1" >/proc/sys/net/ipv6/conf/all/forwarding
        echo -n "1" >/proc/sys/net/ipv6/conf/all/proxy_ndp
        echo -n "0" >/proc/sys/net/ipv6/conf/all/autoconf
        echo -n "0" >/proc/sys/net/ipv6/conf/all/accept_ra
        $IP6TABLES -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        $IP6TABLES -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
        $IP6TABLES -A INPUT -i eth0 -p icmpv6 -j ACCEPT
        $IP6TABLES -P INPUT ACCEPT
        $IP6TABLES -P FORWARD ACCEPT
        $IP6TABLES -P OUTPUT ACCEPT

    Server.conf:

        server-ipv6 2607:f840:44:22::/64
        server 10.8.0.0 255.255.255.0
        port 1195
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh2048.pem
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 208.67.222.222"
        push "dhcp-option DNS 208.67.220.220"
        keepalive 10 60
        tls-auth ta.key 0
        cipher AES-256-CBC
        comp-lzo
        user nobody
        group nobody
        persist-key
        persist-tun
        status openvpn-status.log
        log-append openvpn.log
        verb 5

    Client.conf:

        client
        dev tun
        nobind
        keepalive 10 60
        hand-window 15
        remote 209.111.39.160 1195 udp
        persist-key
        persist-tun
        ca ca.crt
        key client1.key
        cert client1.crt
        remote-cert-tls server
        tls-auth ta.key 1
        comp-lzo
        verb 3
        cipher AES-256-CBC

    I'm not sure where I am going wrong; it could be the firewall, or something missing from server.conf or client.conf. This version of OpenVPN was only released yesterday, and there's little info on the internet about how to set up an IPv6 over IPv4 VPN tunnel. I've read the parts of the manual for this new version of OpenVPN pertaining to IPv6, and it provides very little info too. Thanks for any help.
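    One detail worth ruling out first: the provider routes 2607:f840:44:22::/64 and server.conf uses that prefix, yet ifconfig shows eth0 and tun0 under 2607:f740:... If that is not just an anonymization slip in the question, clients are being handed addresses the provider never routes to this box, and no firewall rule will fix that. Beyond that, a few server-side checks (forwarding really enabled, the VPN /64 present in the routing table, outbound IPv6 working at all) narrow things down quickly. A rough diagnostic sketch, with an arbitrary test hostname:

        # Server-side IPv6 sanity checks for a routed-/64 OpenVPN setup.
        # Diagnostic sketch only; adjust the test hostname as needed.
        import subprocess

        with open("/proc/sys/net/ipv6/conf/all/forwarding") as f:
            print("forwarding =", f.read().strip())   # expect "1"

        subprocess.run(["ip", "-6", "route", "show"], check=True)
        subprocess.run(["ping6", "-c", "3", "ipv6.google.com"], check=True)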

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    A long time ago in a galaxy far, far away... I am trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure; I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A New Hope. Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere, as upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf
          (change data_directory to /home/postgres/main)
        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema. The episode continues in a remote shell known as bash, with both databases running, and the installation of a set of tools with a rather unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc    (strangely, it is not called perldoc)
        perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back. Recreate the structure in PostgreSQL as follows:
    1. pgadmin3 (switch to it)
    2. Click the Execute arbitrary SQL queries icon
    3. Open climate-pg-ddl.sql
    4. Search for TABLE " and replace with TABLE climate." (insert the schema name climate)
    5. Search for on " and replace with on climate." (insert the schema name climate)
    6. Press F5 to execute

    This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi. At this point I am stumped.
    * Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL?
    * How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment, to ease the transition)?
    * How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?
    * How do you ensure the schema name comes through when transforming the data from MySQL to PostgreSQL inserts?

    Resources. A fair bit of information was needed to get this far:
    * https://help.ubuntu.com/community/PostgreSQL
    * http://articles.sitepoint.com/article/site-mysql-postgresql-1
    * http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
    * http://pgfoundry.org/frs/shownotes.php?release_id=810
    * http://sqlfairy.sourceforge.net/

    Thank you!
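    On the auto-increment question specifically: after bulk-loading, each serial column's sequence still starts at 1, so it has to be advanced past the imported maximum before new inserts are safe. A sketch using psycopg2 (any PostgreSQL client works; the table and column names here are invented for illustration, not taken from the climate schema):

        # Advance a serial column's sequence past the migrated data so new
        # inserts don't collide with imported primary keys. Names are examples.
        import psycopg2

        conn = psycopg2.connect("dbname=climate user=postgres")
        cur = conn.cursor()
        cur.execute("""
            SELECT setval(pg_get_serial_sequence('climate.measurement', 'id'),
                          (SELECT COALESCE(MAX(id), 1) FROM climate.measurement))
        """)
        conn.commit()
        conn.close()

    Repeat per table, or query information_schema for every serial column and loop.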

    Read the article

  • JSF application in JBoss web server

    - by chetan
    I am trying to run a JSF application in MyEclipse using the JBoss web server, and I get the following error while starting the JBoss server:

        ERROR [AbstractKernelController] Error installing to Parse: name=vfsfile:/E:/ctn%20sodtware/jboss-5.0.1.GA/server/default/deploy/3aprwebdemo.war/ state=Not Installed mode=Manual requiredState=Parse
        org.jboss.deployers.spi.DeploymentException: Error creating managed object for vfsfile:/E:/ctn%20sodtware/jboss-5.0.1.GA/server/default/deploy/3aprwebdemo.war/
            at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.createMetaData(AbstractParsingDeployerWithOutput.java:337)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.createMetaData(AbstractParsingDeployerWithOutput.java:297)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.createMetaData(AbstractParsingDeployerWithOutput.java:269)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.deploy(AbstractParsingDeployerWithOutput.java:230)
            at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098)
            at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
            at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1598)
            at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1062)
            at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
            at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
            at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781)
            at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:698)
            at org.jboss.system.server.profileservice.ProfileServiceBootstrap.loadProfile(ProfileServiceBootstrap.java:304)
            at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:205)
            at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:405)
            at org.jboss.Main.boot(Main.java:209)
            at org.jboss.Main$1.run(Main.java:547)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: org.jboss.xb.binding.JBossXBException: Failed to parse source: cvc-datatype-valid.1.2.1: '3aprwebdemo' is not a valid value for 'NCName'.
         @ vfsfile:/E:/ctn%20sodtware/jboss-5.0.1.GA/server/default/deploy/3aprwebdemo.war/WEB-INF/web.xml[5,20]
            at org.jboss.xb.binding.parser.sax.SaxJBossXBParser.parse(SaxJBossXBParser.java:203)
            at org.jboss.xb.binding.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:168)
            at org.jboss.deployers.vfs.spi.deployer.JBossXBDeployerHelper.parse(JBossXBDeployerHelper.java:199)
            at org.jboss.deployers.vfs.spi.deployer.JBossXBDeployerHelper.parse(JBossXBDeployerHelper.java:170)
            at org.jboss.deployers.vfs.spi.deployer.SchemaResolverDeployer.parse(SchemaResolverDeployer.java:132)
            at org.jboss.deployers.vfs.spi.deployer.SchemaResolverDeployer.parse(SchemaResolverDeployer.java:118)
            at org.jboss.deployers.vfs.spi.deployer.AbstractVFSParsingDeployer.parseAndInit(AbstractVFSParsingDeployer.java:256)
            at org.jboss.deployers.vfs.spi.deployer.AbstractVFSParsingDeployer.parse(AbstractVFSParsingDeployer.java:188)
            at org.jboss.deployers.spi.deployer.helpers.AbstractParsingDeployerWithOutput.createMetaData(AbstractParsingDeployerWithOutput.java:323)
            ... 22 more
        Caused by: org.xml.sax.SAXException: cvc-datatype-valid.1.2.1: '3aprwebdemo' is not a valid value for 'NCName'.
         @ vfsfile:/E:/ctn%20sodtware/jboss-5.0.1.GA/server/default/deploy/3aprwebdemo.war/WEB-INF/web.xml[5,20]
            at org.jboss.xb.binding.parser.sax.SaxJBossXBParser$MetaDataErrorHandler.error(SaxJBossXBParser.java:426)
            at org.apache.xerces.util.ErrorHandlerWrapper.error(Unknown Source)
            at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
            at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
            at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
            at org.apache.xerces.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(Unknown Source)
            at org.apache.xerces.impl.xs.XMLSchemaValidator.reportSchemaError(Unknown Source)
            at org.apache.xerces.impl.xs.XMLSchemaValidator.processOneAttribute(Unknown Source)
            at org.apache.xerces.impl.xs.XMLSchemaValidator.processAttributes(Unknown Source)
            at org.apache.xerces.impl.xs.XMLSchemaValidator.handleStartElement(Unknown Source)
            at org.apache.xerces.impl.xs.XMLSchemaValidator.startElement(Unknown Source)
            at org.apache.xerces.xinclude.XIncludeHandler.startElement(Unknown Source)
            at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
            at org.apache.xerces.impl.XMLNSDocumentScannerImpl$NSContentDispatcher.scanRootElementHook(Unknown Source)
            at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
            at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
            at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
            at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
            at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
            at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
            at org.jboss.xb.binding.parser.sax.SaxJBossXBParser.parse(SaxJBossXBParser.java:199)
            ... 30 more
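    The root cause is near the top of the trace: cvc-datatype-valid.1.2.1: '3aprwebdemo' is not a valid value for 'NCName', raised while validating web.xml (line 5, column 20 per the trace), where an id attribute carries the project name. An XML NCName may not begin with a digit, so a deployment called 3aprwebdemo can never validate; renaming the project/war to something starting with a letter (for example aprwebdemo3.war) is the usual fix. A rough sketch of the rule, simplified to ASCII:

        # Simplified (ASCII-only) form of the XML NCName rule the deployer
        # applies: the first character must be a letter or underscore.
        import re

        NCNAME = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]*$")

        for name in ("3aprwebdemo", "aprwebdemo3"):
            print(name, "->", "valid" if NCNAME.match(name) else "invalid")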

    Read the article

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git, and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and I have some key points of confusion in the community which it would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great; quite possibly I'll decompose this question into more manageable issues, but here's my first shot.

    Workflows / branching models. Below are the three main descriptions of this I have seen, but they partially contradict each other or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?
    * gitworkflows(7) Manual Page
    * (nvie) A successful Git branching model
    * (reinh) A Git Workflow for Agile Teams

    Merging vs. rebasing (tangled vs. sequential history). The bids on this are as confusing as it gets. Should one pull --rebase, or wait with merging back to the mainline until your task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose (see the sketch at the end of this question). It has other drawbacks, however. Also, many haven't realized a useful property of merging: it isn't commutative (merging a topic branch into master does not mean merging master into the topic branch).

    I am looking for a natural workflow. Sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However, it happens that a fix makes it into master before the developer realizes it should have been placed further downstream, and if that is already pushed (even worse, merged, or something based on it), then the option remaining is cherry-picking, with its associated perils... What simple rules like these do you use? Also included here is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature only to start another one, feeling like the code they just wrote is not there anymore.

    How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches; can they never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow.

    How to decompose into topical branches? We realize that it would be awesome to assemble a finished integration from topic branches, but often the work by our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it cannot be taken out of there again, according to the question above? How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduation would of course be lovely, but we simply cannot keep things untangled enough to manage this. Any suggestions? Integration branches: an illustration, please?

    Vote and comment as much as you'd like; I'll try to keep the issue page clear and informative enough. Thanks!
    Below is a list of related topics on Stack Overflow I have checked out:
    * What are some good strategies to allow deployed applications to be hotfixable?
    * Workflow description for git usage for in-house development
    * Git workflow for corporate Linux kernel development
    * How do you maintain development code and production code? (thanks for this PDF!)
    * git releases management
    * Git Cherry-pick vs Merge Workflow
    * How to cherry-pick multiple commits
    * How do you merge selective files with git-merge?
    * How to cherry pick a range of commits and merge into another branch
    * ReinH Git Workflow
    * git workflow for making modifications you'll never push back to origin
    * Cherry-pick a merge
    * Proper Git workflow for combined OS and Private code?
    * Maintaining Project with Git
    * Why cant Git merge file changes with a modified parent/master.
    * Git branching / rebasing good practices
    * When will "git pull --rebase" get me in to trouble?
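    As referenced above, a minimal illustration of the no-ff merge habit: the explicit merge commit keeps the first-parent history showing where the topic branch started and ended. Driven from Python only to keep a single language across the examples on this page; the branch name is invented.

        # Merge a topic branch with an explicit merge commit so first-parent
        # history preserves the branch topology.
        import subprocess

        def git(*args):
            subprocess.run(("git",) + args, check=True)

        git("checkout", "-b", "topic/rate-limiter", "master")
        # ... commit work on the topic branch here ...
        git("checkout", "master")
        git("merge", "--no-ff", "-m", "Merge topic/rate-limiter", "topic/rate-limiter")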

    Read the article

  • just can't get a controller to work

    - by Asaf
    I am trying to get to mysite/user, so that application/classes/controller/user.php should be run. This is my file tree. Code of controller/user.php:

        <?php defined('SYSPATH') OR die('No direct access allowed.');

        class Controller_User extends Controller_Default
        {
            public $template = 'user';

            function action_index()
            {
                //$view = View::factory('user');
                //$view->render(TRUE);
                $this->template->message = 'hello, world!';
            }
        }
        ?>

    Code of controller/default.php:

        <?php defined('SYSPATH') OR die('No direct access allowed.');

        class Controller_default extends Controller_Template
        {
        }

    bootstrap.php:

        <?php defined('SYSPATH') or die('No direct script access.');

        //-- Environment setup --------------------------------------------------------

        /**
         * Set the default time zone.
         *
         * @see http://kohanaframework.org/guide/using.configuration
         * @see http://php.net/timezones
         */
        date_default_timezone_set('America/Chicago');

        /**
         * Set the default locale.
         *
         * @see http://kohanaframework.org/guide/using.configuration
         * @see http://php.net/setlocale
         */
        setlocale(LC_ALL, 'en_US.utf-8');

        /**
         * Enable the Kohana auto-loader.
         *
         * @see http://kohanaframework.org/guide/using.autoloading
         * @see http://php.net/spl_autoload_register
         */
        spl_autoload_register(array('Kohana', 'auto_load'));

        /**
         * Enable the Kohana auto-loader for unserialization.
         *
         * @see http://php.net/spl_autoload_call
         * @see http://php.net/manual/var.configuration.php#unserialize-callback-func
         */
        ini_set('unserialize_callback_func', 'spl_autoload_call');

        //-- Configuration and initialization -----------------------------------------

        /**
         * Initialize Kohana, setting the default options.
         *
         * The following options are available:
         *
         * - string   base_url    path, and optionally domain, of your application   NULL
         * - string   index_file  name of your index file, usually "index.php"      index.php
         * - string   charset     internal character set used for input and output  utf-8
         * - string   cache_dir   set the internal cache directory                  APPPATH/cache
         * - boolean  errors      enable or disable error handling                  TRUE
         * - boolean  profile     enable or disable internal profiling              TRUE
         * - boolean  caching     enable or disable internal caching                FALSE
         */
        Kohana::init(array(
            'base_url'   => '/mysite/',
            'index_file' => FALSE,
        ));

        /**
         * Attach the file write to logging. Multiple writers are supported.
         */
        Kohana::$log->attach(new Kohana_Log_File(APPPATH.'logs'));

        /**
         * Attach a file reader to config. Multiple readers are supported.
         */
        Kohana::$config->attach(new Kohana_Config_File);

        /**
         * Enable modules. Modules are referenced by a relative or absolute path.
         */
        Kohana::modules(array(
            'auth'       => MODPATH.'auth',       // Basic authentication
            'cache'      => MODPATH.'cache',      // Caching with multiple backends
            'codebench'  => MODPATH.'codebench',  // Benchmarking tool
            'database'   => MODPATH.'database',   // Database access
            'image'      => MODPATH.'image',      // Image manipulation
            'orm'        => MODPATH.'orm',        // Object Relationship Mapping
            'pagination' => MODPATH.'pagination', // Paging of results
            'userguide'  => MODPATH.'userguide',  // User guide and API documentation
        ));

        /**
         * Set the routes. Each route must have a minimum of a name, a URI and a set of
         * defaults for the URI.
         */
        Route::set('default', '(<controller>(/<action>(/<id>)))')
            ->defaults(array(
                'controller' => 'welcome',
                'action'     => 'index',
            ));

        /**
         * Execute the main request. A source of the URI can be passed, eg: $_SERVER['PATH_INFO'].
         * If no source is specified, the URI will be automatically detected.
         */
        echo Request::instance()
            ->execute()
            ->send_headers()
            ->response;
        ?>

    .htaccess:

        RewriteEngine On
        RewriteBase /mysite/

        RewriteRule ^(application|modules|system) - [F,L]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]

    Going to http://localhost/ produces the "hello world" page from welcome.php. Going to http://localhost/mysite/user gives me this: The requested URL /mysite/user was not found on this server.
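    That 404 page is Apache's, not Kohana's, which suggests the request never reaches index.php: either mod_rewrite/AllowOverride is disabled for that directory, or the /mysite/ path in base_url does not match where the site actually lives. A quick probe to separate the two failure modes (the URLs are guesses derived from the config above):

        # If the index.php form answers but the clean URL 404s, the rewrite
        # layer is at fault; if both 404, the /mysite/ path itself is wrong.
        from urllib.error import HTTPError, URLError
        from urllib.request import urlopen

        for url in ("http://localhost/mysite/index.php/user",
                    "http://localhost/mysite/user"):
            try:
                print(url, "->", urlopen(url).getcode())
            except HTTPError as err:
                print(url, "->", err.code)
            except URLError as err:
                print(url, "->", err.reason)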

    Read the article

  • Possible bug in ASP.NET MVC with form values being replaced.

    - by Dan Atkinson
    I appear to be having a problem with ASP.NET MVC: if I have more than one form on a page which uses the same name in each one, but as different types (radio/hidden/etc.), then when the first form posts (I choose the 'Date' radio button, for instance) and the form is re-rendered (say, as part of the results page), the hidden value of SearchType on the other forms is changed to the last radio button value (in this case, SearchType.Name). Below is an example form, reduced for illustration:

        <% Html.BeginForm("Search", "Search", FormMethod.Post); %>
          <%= Html.RadioButton("SearchType", SearchType.Date, true) %>
          <%= Html.RadioButton("SearchType", SearchType.Name) %>
          <input type="submit" name="submitForm" value="Submit" />
        <% Html.EndForm(); %>

        <% Html.BeginForm("Search", "Search", FormMethod.Post); %>
          <%= Html.Hidden("SearchType", SearchType.Colour) %>
          <input type="submit" name="submitForm" value="Submit" />
        <% Html.EndForm(); %>

        <% Html.BeginForm("Search", "Search", FormMethod.Post); %>
          <%= Html.Hidden("SearchType", SearchType.Reference) %>
          <input type="submit" name="submitForm" value="Submit" />
        <% Html.EndForm(); %>

    Resulting page source (this would be part of the results page):

        <form action="/Search/Search" method="post">
          <input type="radio" name="SearchType" value="Date" />
          <input type="radio" name="SearchType" value="Name" />
          <input type="submit" name="submitForm" value="Submit" />
        </form>

        <form action="/Search/Search" method="post">
          <input type="hidden" name="SearchType" value="Name" /> <!-- Should be Colour -->
          <input type="submit" name="submitForm" value="Submit" />
        </form>

        <form action="/Search/Search" method="post">
          <input type="hidden" name="SearchType" value="Name" /> <!-- Should be Reference -->
          <input type="submit" name="submitForm" value="Submit" />
        </form>

    Please can anyone else with RC1 confirm this? Maybe it's because I'm using an enum; I don't know. I should add that I can circumvent this issue by using "manual" <input> tags for the hidden fields, but if I use MVC tags (<%= Html.Hidden(...) %>), ASP.NET MVC replaces them every time. Many thanks.

    Update: I've seen this bug again today. It seems that it crops up when you return a posted page and set hidden form tags with the Html helper. I've contacted Phil Haack about this, because I don't know where else to turn, and I don't believe that this should be expected behaviour as specified by David.

    Read the article

  • zlib on iPhone, stops at first BLOCK

    - by cedric
    I am trying to use the zlib shipped with the iPhone SDK to decompress the zlib stream from our HTTP-based server, but the code always stops after finishing the first zlib block. Obviously, the iPhone SDK is using the standard open-source zlib. My suspicion is that the parameter for inflateInit2 is not appropriate here. I spent lots of time reading the zlib manual, but it isn't that helpful. Here are the details; your help is appreciated.

    (1) The HTTP request:

        NSURL *url = [NSURL URLWithString:@"http://192.168.0.98:82/WIC?query=getcontacts&PIN=12345678&compression=Y"];

    (2) The data I get from the server is something like this (when decompressed). The stream was compressed by the C# DeflateStream class:

        $REC_TYPE=SYS
        Status=OK
        Message=OK
        SetID=
        IsLast=Y
        StartIndex=0
        LastIndex=6
        EOR
        ......
        $REC_TYPE=CONTACTSDISTLIST
        ID=2
        Name=CTU+L%2EA%2E
        OnCallEnabled=Y
        OnCallMinUsers=1
        OnCallEditRight=
        OnCallEditDLRight=D
        Fields=
        CL=
        OnCallStatus=
        EOR

    (3) However, I only get the first block. The decompression code on the iPhone is as follows (copied from a code piece found here); the loop between lines 23~38 always breaks the second time through:

        + (NSData *) uncompress: (NSData*) data
        {
        1     if ([data length] == 0) return nil;
        2     NSInteger length = [data length];
        3     unsigned full_length = length;
        4     unsigned half_length = length / 2;
        5     NSMutableData *decompressed = [NSMutableData dataWithLength: 5*full_length + half_length];
        6     BOOL done = NO;
        7     int status;
        8     z_stream strm;
        9     length = length - 4;
        10    void* bytes = malloc(length);
        11    NSRange range;
        12    range.location = 4;
        13    range.length = length;
        14    [data getBytes: bytes range: range];
        15    strm.next_in = bytes;
        16    strm.avail_in = length;
        17    strm.total_out = 0;
        18    strm.zalloc = Z_NULL;
        19    strm.zfree = Z_NULL;
        20    strm.data_type = Z_BINARY;
        21    // if (inflateInit(&strm) != Z_OK) return nil;
        22    if (inflateInit2(&strm, (-15)) != Z_OK) return nil; // It won't work if -15 is changed to positive numbers.
        23    while (!done)
        24    {
        25        // Make sure we have enough room and reset the lengths.
        26        if (strm.total_out >= [decompressed length])
        27            [decompressed increaseLengthBy: half_length];
        28        strm.next_out = [decompressed mutableBytes] + strm.total_out;
        29        strm.avail_out = [decompressed length] - strm.total_out;
        30
        31        // Inflate another chunk.
        32        status = inflate (&strm, Z_SYNC_FLUSH); // Z_SYNC_FLUSH --> Z_BLOCK won't work either
        33        if (status == Z_STREAM_END) {
        34
        35            done = YES;
        36        }
        37        else if (status != Z_OK) break;
        38    }
        39    if (inflateEnd (&strm) != Z_OK) return nil;
        40    // Set real length.
        41    if (done)
        42    {
        43        [decompressed setLength: strm.total_out];
        44        return [NSData dataWithData: decompressed];
        45    }
        46    else return nil;
        47    }
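    One thing that is easy to test outside the iPhone build: Python's zlib module wraps the same C library, so the windowBits and multi-block behaviour can be experimented with in a few lines. With windowBits = -15, inflate expects a raw deflate stream (which matches C#'s DeflateStream, and matches the observation that positive values fail); but a single inflateInit2 never crosses from one deflate stream into a second one concatenated after it, so if the server emits one stream per record set, the decoder has to re-initialize after each Z_STREAM_END while input remains. Whether the server actually concatenates streams is an assumption here; the sketch uses self-generated data in place of the real payload.

        # Raw-deflate decode loop that restarts after each stream end,
        # handling back-to-back deflate streams.
        import zlib

        def deflate_raw(data):
            comp = zlib.compressobj(9, zlib.DEFLATED, -15)   # -15 = raw deflate
            return comp.compress(data) + comp.flush()

        payload = deflate_raw(b"first stream\n") + deflate_raw(b"second stream\n")

        out, rest = b"", payload
        while rest:
            dec = zlib.decompressobj(-15)
            out += dec.decompress(rest)
            rest = dec.unused_data       # bytes left after Z_STREAM_END
        print(out.decode())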

    Read the article

  • Adding functionality to any TextReader

    - by strager
    I have a Location class which represents a location somewhere in a stream. (The class isn't coupled to any specific stream.) The location information will be used to match tokens to locations in the input in my parser, to allow for nicer error reporting to the user. I want to add location tracking to a TextReader instance; this way, while reading tokens, I can grab the location (which is updated by the TextReader as data is read) and give it to the token during the tokenization process. I am looking for a good approach to accomplishing this goal. I have come up with several designs.

    Manual location tracking. Every time I need to read from the TextReader, I call AdvanceString on the Location object of the tokenizer with the data read.
    Advantages: Very simple. No class bloat. No need to rewrite the TextReader methods.
    Disadvantages: Couples location-tracking logic to the tokenization process. Easy to forget to track something (though unit testing helps with this). Bloats existing code.

    Plain TextReader wrapper. Create a LocatedTextReaderWrapper class which surrounds each method call, tracking a Location property. Example:

        using System.IO;
        using System.Linq;

        public class LocatedTextReaderWrapper : TextReader
        {
            private TextReader source;

            public Location Location { get; set; }

            public LocatedTextReaderWrapper(TextReader source)
                : this(source, new Location())
            {
            }

            public LocatedTextReaderWrapper(TextReader source, Location location)
            {
                this.Location = location;
                this.source = source;
            }

            public override int Read(char[] buffer, int index, int count)
            {
                int ret = this.source.Read(buffer, index, count);
                if (ret > 0)
                {
                    // Track only the characters actually read (ret), not count.
                    this.Location.AdvanceString(string.Concat(buffer.Skip(index).Take(ret)));
                }
                return ret;
            }

            // etc.
        }

    Advantages: Tokenization doesn't know about Location tracking.
    Disadvantages: The user needs to create and dispose a LocatedTextReaderWrapper instance, in addition to their TextReader instance. Doesn't allow different types of tracking or different location trackers to be added without layers of wrappers.

    Event-based TextReader wrapper. Like LocatedTextReaderWrapper, but decoupled from the Location object, raising an event whenever data is read.
    Advantages: Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. Can have multiple, independent Location objects (or other methods of tracking) tracking at once.
    Disadvantages: Requires boilerplate code to enable location tracking. The user needs to create and dispose the wrapper instance, in addition to their TextReader instance.

    Aspect-oriented approach. Use AOP to perform like the event-based wrapper approach.
    Advantages: Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. No need to rewrite the TextReader methods.
    Disadvantages: Requires external dependencies, which I want to avoid.

    I am looking for the best approach in my situation. I would like to:
    * not bloat the tokenizer methods with location tracking;
    * not require heavy initialization in user code;
    * not have any/much boilerplate/duplicated code;
    * (perhaps) not couple the TextReader with the Location class.

    Any insight into this problem and possible solutions or adjustments is welcome. Thanks! (For those who want a specific question: what is the best way to wrap the functionality of a TextReader?) I have implemented the "plain TextReader wrapper" and "event-based TextReader wrapper" approaches and am displeased with both, for the reasons mentioned in their disadvantages.
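    Of the four options, the event-based wrapper is the one that decouples cleanest, and the pattern itself is language-agnostic. A compact sketch of it follows, written in Python only to keep a single language across the examples on this page, with a plain callback list standing in for a .NET event; the class and method names are invented for illustration.

        # Event-based reader wrapper: forwards reads to the underlying
        # reader and notifies observers with the text actually consumed.
        # Location tracking is just one observer among many.
        import io

        class NotifyingReader:
            def __init__(self, source):
                self.source = source
                self.observers = []            # callables taking the chunk read

            def read(self, n=-1):
                chunk = self.source.read(n)
                for observer in self.observers:
                    observer(chunk)
                return chunk

        class Location:
            def __init__(self):
                self.line, self.column = 1, 1

            def advance(self, text):
                for ch in text:
                    if ch == "\n":
                        self.line, self.column = self.line + 1, 1
                    else:
                        self.column += 1

        reader = NotifyingReader(io.StringIO("let x =\n 42"))
        loc = Location()
        reader.observers.append(loc.advance)
        reader.read(10)
        print(loc.line, loc.column)    # position after consuming 10 chars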

    Read the article

  • Flex SDK missing fundamental things

    - by Bart van Heukelom
    All of a sudden Flash Builder 4 is missing all kinds of fundamental things and is generating incorrect errors. I had the same issue yesterday, where I fixed it by downloading a new Flex SDK and importing that into FB. I did this again, but this time it fixed nothing. I don't think it's something I did, like removing critical references from the build path. The errors also appeared in projects I was not working on at the time. It occurs for ActionScript, Flex and Flex Library projects alike.

    Update 3: Well, I've narrowed the problem down to a single piece of code, though a very simple one. I can make a new workspace in FB and things work OK, then screw the workspace up forever by adding this code to a project. All projects will have errors, and closing or even removing the faulty project does not change this. Making another new workspace (without the faulty code) makes my projects compile again. Link: http://www.the3rdage.net/files/2745/Main.as (I've uploaded the file in case an odd character or encoding error causes the error.)

    Update 2: I've tried compiling manually with mxmlc; the same errors occur. It appears to be an SDK problem, not Flash Builder.

    Update: I find this stack trace in the Flash Builder error log:

        !ENTRY com.adobe.flexbuilder.project 4 43 2010-05-11 11:55:47.495
        !MESSAGE Uncaught exception in compiler
        !STACK 0
        java.lang.NullPointerException
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2592)
            at macromedia.asc.parser.VariableBindingNode.evaluate(VariableBindingNode.java:64)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2233)
            at macromedia.asc.parser.ListNode.evaluate(ListNode.java:44)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2578)
            at macromedia.asc.parser.VariableDefinitionNode.evaluate(VariableDefinitionNode.java:48)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2310)
            at macromedia.asc.parser.StatementListNode.evaluate(StatementListNode.java:60)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2503)
            at macromedia.asc.parser.WithStatementNode.evaluate(WithStatementNode.java:44)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2310)
            at macromedia.asc.parser.StatementListNode.evaluate(StatementListNode.java:60)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2891)
            at macromedia.asc.parser.FunctionCommonNode.evaluate(FunctionCommonNode.java:106)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:2905)
            at macromedia.asc.parser.FunctionCommonNode.evaluate(FunctionCommonNode.java:106)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:3643)
            at macromedia.asc.parser.ClassDefinitionNode.evaluate(ClassDefinitionNode.java:106)
            at macromedia.asc.semantics.ConstantEvaluator.evaluate(ConstantEvaluator.java:3371)
            at macromedia.asc.parser.ProgramNode.evaluate(ProgramNode.java:80)
            at flex2.compiler.as3.As3Compiler.analyze4(As3Compiler.java:709)
            at flex2.compiler.CompilerAPI.analyze(CompilerAPI.java:3089)
            at flex2.compiler.CompilerAPI.analyze(CompilerAPI.java:2977)
            at flex2.compiler.CompilerAPI.batch2(CompilerAPI.java:528)
            at flex2.compiler.CompilerAPI.batch(CompilerAPI.java:1274)
            at flex2.compiler.CompilerAPI.compile(CompilerAPI.java:1496)
            at flex2.tools.oem.Application.compile(Application.java:1188)
            at flex2.tools.oem.Application.recompile(Application.java:1133)
            at flex2.tools.oem.Application.compile(Application.java:819)
            at flex2.tools.flexbuilder.BuilderApplication.compile(BuilderApplication.java:344)
            at com.adobe.flexbuilder.multisdk.compiler.internal.ASApplicationBuilder$MyBuilder.mybuild(ASApplicationBuilder.java:276)
            at com.adobe.flexbuilder.multisdk.compiler.internal.ASApplicationBuilder.build(ASApplicationBuilder.java:127)
            at com.adobe.flexbuilder.multisdk.compiler.internal.ASBuilder.build(ASBuilder.java:190)
            at com.adobe.flexbuilder.multisdk.compiler.internal.ASItemBuilder.build(ASItemBuilder.java:74)
            at com.adobe.flexbuilder.project.compiler.internal.FlexProjectBuilder.buildItem(FlexProjectBuilder.java:480)
            at com.adobe.flexbuilder.project.compiler.internal.FlexProjectBuilder.build(FlexProjectBuilder.java:306)
            at com.adobe.flexbuilder.project.compiler.internal.FlexIncrementalBuilder.build(FlexIncrementalBuilder.java:157)
            at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:627)
            at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
            at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:170)
            at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:201)
            at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:253)
            at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
            at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:256)
            at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:309)
            at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:341)
            at org.eclipse.core.internal.events.AutoBuildJob.doBuild(AutoBuildJob.java:140)
            at org.eclipse.core.internal.events.AutoBuildJob.run(AutoBuildJob.java:238)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

    Read the article

  • TripleDES in Perl/PHP/ColdFusion

    - by Seidr
    Recently a problem arose regarding hooking up an API with a payment processor, who requested a string to be encrypted for use as a token, using the TripleDES standard. Our applications run on ColdFusion, which has an Encrypt function that supports TripleDES; however, the result we were getting back was not what the payment processor expected. First of all, here is the token the payment processor was expecting:

        AYOF+kRtg239Mnyc8QIarw==

    And below is the snippet of ColdFusion we were using, and the resulting string:

        <!--- ColdFusion Crypt (here be monsters) --->
        <cfset theKey="123412341234123412341234">
        <cfset theString = "username=test123">
        <cfset strEncodedEnc = Encrypt(theString, theKey, "DESEDE", "Base64")>
        <!--- resulting string (strEncodedEnc): tc/Jb7E9w+HpU2Yvn5dA7ILGmyNTQM0h --->

    As you can see, this was not returning the string we were hoping for. Seeking a solution, we ditched ColdFusion for this process and attempted to reproduce the token in PHP. Now, I'm aware that various languages implement encryption in different ways - for example, in the past, managing encryption between a C# application and a PHP back-end, I've had to play about with padding in order to get the two to talk - but my experience has been that PHP generally behaves when it comes to encryption standards. Anyway, on to the PHP source we tried, and the resulting string:

        /* PHP Circus (here be Elephants) */
        $theKey = "123412341234123412341234";
        $theString = "username=test123";
        $strEncodedEnc = base64_encode(mcrypt_ecb(MCRYPT_3DES, $theKey, $theString, MCRYPT_ENCRYPT));
        /* resulting string (strEncodedEnc): sfiSu4mVggia8Ysw98x0uw== */

    As you can plainly see, we've got another string that differs from both the string expected by the payment processor AND the one produced by ColdFusion. Cue head-against-wall integration techniques. After many to-and-fro communications with the payment processor (lots and lots of reps stating 'we can't help with coding issues, you must be doing it incorrectly, read the manual'), we were finally escalated to someone with more than a couple of brain cells to rub together, who was able to step back and actually look at and diagnose the issue. He agreed our CF and PHP attempts were not resulting in the correct string. After a quick search, he also agreed that it was not necessarily our source, but rather how the two languages implemented their vision of the TripleDES standard. Coming into the office this morning, we were met by an email with a snippet of source code in Perl. This was the code they were using directly on their end to produce the expected token:

        #!/usr/bin/perl
        # Perl Crypt Calamity (here be...something)
        use strict;
        use CGI;
        use MIME::Base64;
        use Crypt::TripleDES;

        my $cgi = CGI->new();
        my $param = $cgi->Vars();
        $param->{key} = "123412341234123412341234";
        $param->{string} = "username=test123";
        my $des = Crypt::TripleDES->new();

        my $enc = $des->encrypt3($param->{string}, $param->{key});
        $enc = encode_base64($enc);
        $enc =~ s/\n//gs;
        # resulting string (enc): AYOF+kRtg239Mnyc8QIarw==

    So, there we have it. Three languages, three implementations of what their documentation quotes as standard TripleDES encryption, and three totally different resulting strings. My question is: from your experience of these three languages and their implementations of the TripleDES algorithm, have you been able to get any two of them to give the same response, and if so, what tweaks did you have to make to the code to get there?
    I understand this is a very drawn-out question, but I wanted to give a clear and precise setting for each stage of the testing we had to perform. I'll also be doing some more investigative work on this subject later, and will post any findings to this question so that others may avoid this headache.
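    (One way to arbitrate between disagreeing implementations is to add a fourth reference whose parameters are all explicit. Below is a minimal Java sketch - it pins DESede to ECB mode with no padding, which is legal here because the 16-byte test string is already a multiple of the 8-byte block size. It is not claimed to reproduce the processor's token; rather, whichever of the three languages matches its output is the one doing plain EDE3/ECB, and the others must be differing in key handling, mode, or padding:)

        import java.util.Base64;
        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;

        public class TripleDesReference {
            public static void main(String[] args) throws Exception {
                byte[] key = "123412341234123412341234".getBytes("US-ASCII"); // 24 bytes -> 3 DES keys
                byte[] msg = "username=test123".getBytes("US-ASCII");         // 16 bytes, two blocks
                // Every choice is explicit: DESede (EDE3), ECB mode, no padding.
                Cipher c = Cipher.getInstance("DESede/ECB/NoPadding");
                c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "DESede"));
                System.out.println(Base64.getEncoder().encodeToString(c.doFinal(msg)));
            }
        }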

    Read the article

  • Why are there 3 conflicting OpenCV camera calibration formulas?

    - by John
    I'm having a problem with OpenCV's various parameterizations of the coordinates used for camera calibration purposes. The problem is that three different sources of information on the image distortion formulae apparently give three non-equivalent descriptions of the parameters and equations involved:

    (1) In their book "Learning OpenCV…" Bradski and Kaehler write regarding lens distortion (page 376):

        xcorrected = x * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ],
        ycorrected = y * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ p1 * ( r^2 + 2 * y^2 ) + 2 * p2 * x * y ],

    where r = sqrt( x^2 + y^2 ). Presumably, (x, y) are the coordinates of pixels in the uncorrected captured image corresponding to world-point objects with coordinates (X, Y, Z), camera-frame referenced, for which

        xcorrected = fx * ( X / Z ) + cx  and  ycorrected = fy * ( Y / Z ) + cy,

    where fx, fy, cx, and cy are the camera's intrinsic parameters. So, having (x, y) from a captured image, we can obtain the desired coordinates (xcorrected, ycorrected) to produce an undistorted image of the captured world scene by applying the first two correction expressions above. However...

    (2) The complication arises as we look at the OpenCV 2.0 C Reference entry under the Camera Calibration and 3D Reconstruction section. For ease of comparison, we start with all world-point (X, Y, Z) coordinates being expressed with respect to the camera's reference frame, just as in #1. Consequently, the transformation matrix [ R | t ] is of no concern. In the C reference, it is expressed that:

        x' = X / Z,
        y' = Y / Z,
        x'' = x' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ 2 * p1 * x' * y' + p2 * ( r'^2 + 2 * x'^2 ) ],
        y'' = y' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ p1 * ( r'^2 + 2 * y'^2 ) + 2 * p2 * x' * y' ],

    where r' = sqrt( x'^2 + y'^2 ), and finally that

        u = fx * x'' + cx,
        v = fy * y'' + cy.

    As one can see, these expressions are not equivalent to those presented in #1, with the result that the two sets of corrected coordinates (xcorrected, ycorrected) and (u, v) are not the same. Why the contradiction? It seems to me the first set makes more sense, as I can attach physical meaning to each and every x and y in there, while I find no physical meaning in x' = X / Z and y' = Y / Z when the camera focal length is not exactly 1. Furthermore, one cannot compute x' and y', for we don't know (X, Y, Z).

    (3) Unfortunately, things get even murkier when we refer to the writings in Intel's Open Source Computer Vision Library Reference Manual, section Lens Distortion (page 6-4), which states in part: "Let (u, v) be true pixel image coordinates, that is, coordinates with ideal projection, and (u~, v~) be corresponding real observed (distorted) image coordinates. Similarly, (x, y) are ideal (distortion-free) and (x~, y~) are real (distorted) image physical coordinates. Taking into account two expansion terms gives the following:

        x~ = x * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ],
        y~ = y * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p2 * x * y + p1 * ( r^2 + 2 * y^2 ) ],

    where r = sqrt( x^2 + y^2 ). ... Because u~ = cx + fx * u and v~ = cy + fy * v, ... the resultant system can be rewritten as follows:

        u~ = u + ( u - cx ) * [ k1 * r^2 + k2 * r^4 + 2 * p1 * y + p2 * ( r^2 / x + 2 * x ) ],
        v~ = v + ( v - cy ) * [ k1 * r^2 + k2 * r^4 + 2 * p2 * x + p1 * ( r^2 / y + 2 * y ) ].

    The latter relations are used to undistort images from the camera."
    Well, it would appear that the expressions involving x~ and y~ coincide with the two expressions given at the top of this writing involving xcorrected and ycorrected. However, x~ and y~ do not refer to corrected coordinates, according to the given description. I don't understand the distinction between the meaning of the coordinates (x~, y~) and (u~, v~), or, for that matter, between the pairs (x, y) and (u, v). From their descriptions it appears their only distinction is that (x~, y~) and (x, y) refer to 'physical' coordinates while (u~, v~) and (u, v) do not. What is this distinction all about? Aren't they all physical coordinates? I'm lost! Thanks for any input!
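    (For reference, here is the model in one consistent notation as the OpenCV documentation states it - this is #2 above. One reading that reconciles all three sources, offered as a plausible interpretation rather than something any of them states explicitly: the (x, y) of #1 and the 'physical' coordinates of #3 are the normalized, focal-length-free coordinates x' = X/Z, y' = Y/Z, not pixel coordinates; pixels enter only at the final u, v step.)

        \begin{aligned}
        x' &= X/Z, \qquad y' = Y/Z, \qquad r^2 = x'^2 + y'^2 \\
        x'' &= x'\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2) \\
        y'' &= y'\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' \\
        u &= f_x x'' + c_x, \qquad v = f_y y'' + c_y
        \end{aligned}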

    Read the article

  • AES Cipher not picking up IV

    - by timothyjc
    I am trying to use an IV with AES so that the encrypted text is unpredictable. However, the encrypted hex string is always the same. I have actually tried a few methods of attempting to add some randomness by passing some additional parameters to the cipher init call:

    1) Manual IV generation:

        byte[] iv = generateIv();
        IvParameterSpec ivspec = new IvParameterSpec(iv);

    2) Asking the cipher to generate the IV:

        AlgorithmParameters params = cipher.getParameters();
        params.getParameterSpec(IvParameterSpec.class);

    3) Using a PBEParameterSpec:

        byte[] encryptionSalt = generateSalt();
        PBEParameterSpec pbeParamSpec = new PBEParameterSpec(encryptionSalt, 1000);

    All of these seem to have no influence on the encrypted text... help!!! My code:

        package com.citc.testencryption;

        import java.security.NoSuchAlgorithmException;
        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.SecretKey;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class Main extends Activity {

            public static final int SALT_LENGTH = 20;
            public static final int PBE_ITERATION_COUNT = 1000;
            private static final String RANDOM_ALGORITHM = "SHA1PRNG";
            private static final String PBE_ALGORITHM = "PBEWithSHA256And256BitAES-CBC-BC";
            private static final String CIPHER_ALGORITHM = "PBEWithSHA256And256BitAES-CBC-BC";
            private static final String TAG = Main.class.getSimpleName();

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                try {
                    String password = "password";
                    String plainText = "plaintext message to be encrypted";

                    // byte[] salt = generateSalt();
                    byte[] salt = "dfghjklpoiuytgftgyhj".getBytes();
                    Log.i(TAG, "Salt: " + salt.length + " " + HexEncoder.toHex(salt));

                    PBEKeySpec pbeKeySpec = new PBEKeySpec(password.toCharArray(), salt, PBE_ITERATION_COUNT);
                    SecretKeyFactory keyFac = SecretKeyFactory.getInstance(PBE_ALGORITHM);
                    SecretKey secretKey = keyFac.generateSecret(pbeKeySpec);
                    byte[] key = secretKey.getEncoded();
                    Log.i(TAG, "Key: " + HexEncoder.toHex(key));

                    // PBEParameterSpec pbeParamSpec = new PBEParameterSpec(salt, ITERATION_COUNT);
                    Cipher encryptionCipher = Cipher.getInstance(CIPHER_ALGORITHM);
                    // byte[] encryptionSalt = generateSalt();
                    // Log.i(TAG, "Encrypted Salt: " + encryptionSalt.length + " " + HexEncoder.toHex(encryptionSalt));
                    // PBEParameterSpec pbeParamSpec = new PBEParameterSpec(encryptionSalt, 1000);
                    // byte[] iv = params.getParameterSpec(IvParameterSpec.class).getIV();
                    // Log.i(TAG, encryptionCipher.getParameters() + " ");

                    byte[] iv = generateIv();
                    IvParameterSpec ivspec = new IvParameterSpec(iv);
                    encryptionCipher.init(Cipher.ENCRYPT_MODE, secretKey, ivspec);
                    byte[] encryptedText = encryptionCipher.doFinal(plainText.getBytes());
                    Log.i(TAG, "Encrypted: " + HexEncoder.toHex(encryptedText)); // <== Why is this always the same :(

                    Cipher decryptionCipher = Cipher.getInstance(CIPHER_ALGORITHM);
                    decryptionCipher.init(Cipher.DECRYPT_MODE, secretKey, ivspec);
                    byte[] decryptedText = decryptionCipher.doFinal(encryptedText);
                    Log.i(TAG, "Decrypted: " + new String(decryptedText));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            private byte[] generateSalt() throws NoSuchAlgorithmException {
                SecureRandom random = SecureRandom.getInstance(RANDOM_ALGORITHM);
                byte[] salt = new byte[SALT_LENGTH];
                random.nextBytes(salt);
                return salt;
            }

            private byte[] generateIv() throws NoSuchAlgorithmException {
                SecureRandom random = SecureRandom.getInstance(RANDOM_ALGORITHM);
                byte[] iv = new byte[16];
                random.nextBytes(iv);
                return iv;
            }
        }
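    (A likely explanation, offered as an assumption to verify rather than a certainty: PBE ciphers such as PBEWithSHA256And256BitAES-CBC-BC derive both the key and the IV from the password and salt, so the IvParameterSpec handed to init is effectively ignored - and with a hard-coded salt the ciphertext is deterministic. A sketch that keeps the key derivation above but hands the raw key bytes to a plain AES cipher, which does honor a caller-supplied IV; add javax.crypto.spec.SecretKeySpec to the imports:)

        SecretKeyFactory keyFac = SecretKeyFactory.getInstance(PBE_ALGORITHM);
        SecretKey secretKey = keyFac.generateSecret(
                new PBEKeySpec(password.toCharArray(), salt, PBE_ITERATION_COUNT));
        // reuse the PBE-derived bytes as a plain 256-bit AES key
        SecretKeySpec aesKey = new SecretKeySpec(secretKey.getEncoded(), "AES");

        byte[] iv = generateIv(); // random 16 bytes, as above
        Cipher encryptionCipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        encryptionCipher.init(Cipher.ENCRYPT_MODE, aesKey, new IvParameterSpec(iv));
        byte[] encryptedText = encryptionCipher.doFinal(plainText.getBytes());
        // The ciphertext now changes with every run; store iv alongside it,
        // since decryption must be initialized with the same IvParameterSpec.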

    Read the article

  • is it right to call ejb bean from thread by ThreadPoolExecutor?

    - by kislo_metal
    I am trying to call an EJB bean method from a thread, and I am getting this error (the app server is GlassFish v3):

        Log Level: SEVERE
        Logger: javax.enterprise.system.std.com.sun.enterprise.v3.services.impl
        Name-Value Pairs: {_ThreadName=Thread-1, _ThreadID=42}
        Record Number: 928
        Complete Message:
        java.lang.NullPointerException
            at ua.co.rufous.server.broker.TempLicService.run(TempLicService.java:35)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:637)

    Here is the thread:

        public class TempLicService implements Runnable {
            String hash;

            // it's a stateful bean
            @EJB
            private LicActivatorLocal lActivator;

            public TempLicService(String hash) {
                this.hash = hash;
            }

            @Override
            public void run() {
                lActivator.proccessActivation(hash);
            }
        }

    My ThreadPoolExecutor:

        public class RequestThreadPoolExecutor extends ThreadPoolExecutor {
            private boolean isPaused;
            private ReentrantLock pauseLock = new ReentrantLock();
            private Condition unpaused = pauseLock.newCondition();
            private static RequestThreadPoolExecutor threadPool;

            private RequestThreadPoolExecutor() {
                super(1, Integer.MAX_VALUE, 10, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
                System.out.println("RequestThreadPoolExecutor created");
            }

            public static RequestThreadPoolExecutor getInstance() {
                if (threadPool == null)
                    threadPool = new RequestThreadPoolExecutor();
                return threadPool;
            }

            public void runService(Runnable task) {
                threadPool.execute(task);
            }

            protected void beforeExecute(Thread t, Runnable r) {
                super.beforeExecute(t, r);
                pauseLock.lock();
                try {
                    while (isPaused)
                        unpaused.await();
                } catch (InterruptedException ie) {
                    t.interrupt();
                } finally {
                    pauseLock.unlock();
                }
            }

            public void pause() {
                pauseLock.lock();
                try {
                    isPaused = true;
                } finally {
                    pauseLock.unlock();
                }
            }

            public void resume() {
                pauseLock.lock();
                try {
                    isPaused = false;
                    unpaused.signalAll();
                } finally {
                    pauseLock.unlock();
                }
            }

            public void shutDown() {
                threadPool.shutdown();
            }

            // <<<<<< creating the thread here
            public void runByHash(String hash) {
                Runnable service = new TempLicService(hash);
                threadPool.runService(service);
            }
        }

    And the method where I call it (it is a GWT servlet, but there is no problem calling a thread that does not contain an EJB):

        @Override
        public Boolean submitHash(String hash) {
            System.out.println("submiting hash");
            try {
                if (tBoxService.getTempLicStatus(hash) == 1) {
                    // <<< here is the call
                    RequestThreadPoolExecutor.getInstance().runByHash(hash);
                    return true;
                }
            } catch (NoResultException e) {
                e.printStackTrace();
            }
            return false;
        }

    I need to organize a pool for submitting hashes to the server (calls of the LicActivator bean). Is the ThreadPoolExecutor design a good idea, and why is it not working in my case? (As I understand it, we can't create threads inside a bean, but can we call a bean from different threads?) If not, what is the best practice for organizing such a request pool? Thanks.

    Answer so far: I am using DI (EJB 3.1), so I do not need any lookup here. (The application is packed in an EAR with both modules in it, web module and EJB; it works perfectly for me.) But I can use it only in managed classes. So:
    1. Can I use a manual lookup in the thread?
    2. Could I use a bean that extends ThreadPoolExecutor and calls another bean that implements Runnable? Or is that not allowed?
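    (The NullPointerException at TempLicService.java:35 is consistent with lActivator being null: @EJB injection happens only in container-managed classes, and a Runnable instantiated with new TempLicService(hash) is not managed, so the field is never set. Two sketches of ways around this - the java:global name is illustrative and depends on your EAR, module, and bean names:)

        // Option 1: a managed caller (the servlet) obtains the bean and hands it over.
        public class TempLicService implements Runnable {
            private final String hash;
            private final LicActivatorLocal lActivator;

            public TempLicService(String hash, LicActivatorLocal lActivator) {
                this.hash = hash;
                this.lActivator = lActivator; // injected into the servlet, passed in here
            }

            @Override
            public void run() {
                lActivator.proccessActivation(hash);
            }
        }

        // Option 2: look the bean up in JNDI inside run() instead of injecting it.
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        private LicActivatorLocal lookupActivator() throws NamingException {
            return (LicActivatorLocal) new InitialContext()
                    .lookup("java:global/myapp/myejbmodule/LicActivator"); // adjust to your names
        }

    (Separately, a @Stateful bean is not safe to call from many threads at once; a @Stateless bean, or an EJB 3.1 @Asynchronous business method, sidesteps both the injection and the concurrency problems.)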

    Read the article

  • How do I get confidence intervals without inverting a singular Hessian matrix?

    - by AmalieNot
    Hello. I recently posted this to reddit and it was suggested I come here, so here I am. I'm a student working on an epidemiology model in R, using maximum likelihood methods. I created my negative log-likelihood function. It's sort of gross looking, but here it is:

        NLLdiff = function(v1, CV1, v2, CV2, st1 = (czI01 - czV01), st2 = (czI02 - czV02),
                           st01 = czI01, st02 = czI02, tt1 = czT01, tt2 = czT02) {
            prob1 = (1 + v1 * CV1 * tt1)^(-1/CV1)
            prob2 = (1 + v2 * CV2 * tt2)^(-1/CV2)
            -(sum(dbinom(st1, st01, prob1, log = T)) + sum(dbinom(st2, st02, prob2, log = T)))
        }

    The reason the first line looks so awful is that most of the data it takes is inputted there. czI01, for example, is already declared. I did this simply so that my later calls to the function don't all have to have awful vectors in them. I then optimized for CV1, CV2, v1 and v2 using mle2 (library bbmle). That's also a bit gross looking:

        ml.cz.diff = mle2(NLLdiff, start = list(v1 = vguess, CV1 = cguess, v2 = vguess, CV2 = cguess),
                          method = "L-BFGS-B", lower = 0.0001)

    Now, everything works fine up until here. ml.cz.diff gives me values that I can turn into a plot that reasonably fits my data. I also have several different models, and can get AICc values to compare them. However, when I try to get confidence intervals around v1, CV1, v2 and CV2, I have problems. Basically, I get a negative bound on CV1, which is impossible as it actually represents a square number in the biological model, as well as some warnings. The warnings are here: http://i.imgur.com/B3H2l.png. Is there a better way to get confidence intervals? Or, really, a way to get confidence intervals that make sense here?

    What I see happening is that, by coincidence, my Hessian matrix is singular for some values in the optimization space. But, since I'm optimizing over 4 variables and don't have overly extensive programming knowledge, I can't come up with a good method of optimization that doesn't rely on the Hessian. I have googled the problem - it suggested that my model's bad, but I'm reconstructing some work done before, which suggests that my model's really not awful (the plots I make using ml.cz.diff look like the plots of the original work). I have also read the relevant parts of the manual, as well as Bolker's book Ecological Models and Data in R. I have also tried different optimization methods, which resulted in a longer run time but the same errors. The "SANN" method didn't finish running within an hour, so I didn't wait around to see the result.

    tl;dr: my confidence intervals are bad; is there a relatively straightforward way to fix them in R?

    My vectors are:

        czT01 = c(5, 5, 5, 5, 5, 5, 5, 25, 25, 25, 25, 25, 25, 25, 50, 50, 50, 50, 50, 50, 50)
        czT02 = c(5, 5, 5, 5, 5, 10, 10, 10, 10, 10, 25, 25, 25, 25, 25, 50, 50, 50, 50, 50, 75, 75, 75, 75, 75)
        czI01 = c(25, 24, 22, 22, 26, 23, 25, 25, 25, 23, 25, 18, 21, 24, 22, 23, 25, 23, 25, 25, 25)
        czI02 = c(13, 16, 5, 18, 16, 13, 17, 22, 13, 15, 15, 22, 12, 12, 13, 13, 11, 19, 21, 13, 21, 18, 16, 15, 11)
        czV01 = c(1, 4, 5, 5, 2, 3, 4, 11, 8, 1, 11, 12, 10, 16, 5, 15, 18, 12, 23, 13, 22)
        czV02 = c(0, 3, 1, 5, 1, 6, 3, 4, 7, 12, 2, 8, 8, 5, 3, 6, 4, 6, 11, 5, 11, 1, 13, 9, 7)

    and I get my guesses by:

        v = -log((c(czI01, czI02) - c(czV01, czV02))/c(czI01, czI02))/c(czT01, czT02)
        vguess = mean(v)
        cguess = var(v)/vguess^2

    It's also possible that I'm doing something else completely wrong, but my results seem reasonable, so I haven't caught it.
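    (Two standard escapes, offered as general techniques rather than a verdict on this model: bbmle's confint(ml.cz.diff) computes profile-likelihood intervals, which avoid inverting the Hessian altogether; and the positivity constraint can be built into the geometry by optimizing on the log scale,

        v_i = e^{\theta_{v_i}}, \qquad CV_i = e^{\theta_{CV_i}}, \qquad \theta_{v_i},\, \theta_{CV_i} \in \mathbb{R},

    so that an interval computed for \theta_{CV_1}, once exponentiated, can never cross zero.)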

    Read the article

  • Use QuickCheck by generating primes

    - by Dan
    Background: For fun, I'm trying to write a property for QuickCheck that can test the basic idea behind cryptography with RSA.

    - Choose two distinct primes, p and q.
    - Let N = p*q.
    - e is some number relatively prime to (p-1)(q-1) (in practice, e is usually 3 for fast encoding).
    - d is the modular inverse of e modulo (p-1)(q-1).
    - For all x such that 1 < x < N, it is always true that (x^e)^d = x modulo N.

    In other words, x is the "message", raising it to the eth power mod N is the act of "encoding" the message, and raising the encoded message to the dth power mod N is the act of "decoding" it. (The property is also trivially true for x = 1, a case which is its own encryption.)

    Code: Here are the methods I have coded up so far:

        import Test.QuickCheck

        -- modular exponentiation
        modExp :: Integral a => a -> a -> a -> a
        modExp y z n = modExp' (y `mod` n) z `mod` n
            where modExp' y z
                    | z == 0 = 1
                    | even z = modExp (y*y) (z `div` 2) n
                    | odd z  = (modExp (y*y) (z `div` 2) n) * y

        -- relatively prime
        rPrime :: Integral a => a -> a -> Bool
        rPrime a b = gcd a b == 1

        -- multiplicative inverse (modular)
        mInverse :: Integral a => a -> a -> a
        mInverse 1 _ = 1
        mInverse x y = (n * y + 1) `div` x
            where n = x - mInverse (y `mod` x) x

        -- just a quick way to test for primality
        n `divides` x = x `mod` n == 0
        primes = 2 : filter isPrime [3..]
        isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y*y <= x) primes

        -- the property
        prop_rsa (p, q, x) = isPrime p && isPrime q &&
                             p /= q && x > 1 && x < n &&
                             rPrime e t ==>
                             x == (x `powModN` e) `powModN` d
            where
                e = 3
                n = p*q
                t = (p-1)*(q-1)
                d = mInverse e t
                a `powModN` b = modExp a b n

    (Thanks, Google and random blog, for the implementation of the modular multiplicative inverse.)

    Question: The problem should be obvious: there are way too many conditions on the property to make it at all usable. Trying to invoke quickCheck prop_rsa in ghci made my terminal hang. So I've poked around the QuickCheck manual a bit, and it says properties may take the form

        forAll <generator> $ \<pattern> -> <property>

    How do I make a <generator> for prime numbers? Or for the other constraints, so that quickCheck doesn't have to sift through a bunch of failed conditions? Any other general advice (especially regarding QuickCheck) is welcome.
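    (A sketch of one such generator, assuming the definitions above are in scope: elements picks from a precomputed list, so primality never has to be re-checked at test time, and only the cheap conditions remain as discards:)

        -- draw uniformly from a precomputed pool of primes
        genPrime :: Gen Integer
        genPrime = elements (take 200 primes)

        prop_rsa' :: Property
        prop_rsa' =
            forAll genPrime $ \p ->
            forAll genPrime $ \q ->
            (p /= q && rPrime 3 ((p - 1) * (q - 1))) ==>
                forAll (choose (2, p * q - 1)) $ \x ->   -- a valid message for this modulus
                    let n = p * q
                        d = mInverse 3 ((p - 1) * (q - 1))
                    in modExp (modExp x 3 n) d n == x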

    Read the article

  • Java Applet Deployment, ClassNotFoundException (primary class)

    - by Matt
    This is driving me up the wall. I have checked and rechecked spelling and paths. I have tried just about every combination of paths, including relative, absolute, and full HTTP paths. I continue to get the following error when trying to load a Java applet:

        java.lang.ClassNotFoundException: AppletClient.class
            at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Exception: java.lang.ClassNotFoundException: AppletClient.class

    The HTML used to load the applet:

        <applet width="100" height="100"
                archive="applet/myapplet.jar, applet/applet_dependency.jar"
                code="AppletClient.class">
            <param value="blahblah" name="username">
            <param value="false" name="codebase_lookup">
        </applet>

    The applet is in a relative directory, "applet", from the path of the current page. I have unzipped the jar file and can see AppletClient.class. Also, in the source of the project, it is spelled that way (casing and all). I have tried with/without the parameters. I have changed the names of the archive jars in the applet tag just to see if I get a different error for bad file names (same error). I have manually done GETs on the jars to make sure the server is responding to the requests (it is). I have tried with and without the codebase attribute, with all different varieties of paths (I start getting bad "magic number" errors on those). I know that this error sometimes pops up when a dependency fails to load, so it can be misleading, but all dependencies are present, accounted for, and fetchable via manual GETs.

    Between each and every attempt I always clear my cache in Firefox. These problems are reproduced in IE8 and Chrome as well. Per my Java console in the browser, I am running Java Plug-in 1.6.0_20. This is the same machine that I develop the applet on, which runs fine via Eclipse.

    Finally, I kicked on Fiddler2, and I don't see a single request for the jar files anywhere. The host site is running from my Visual Studio debugger, so it's running on localhost, but I see the requests for all the other resources in Fiddler. Just... no jars. ANYWHERE. I cleared the log, cleared my browser cache, and did a ctrl-R refresh. And still, not a single jar request in the Fiddler log. I even did a delayed write (with JS) of the applet tag after the page loaded, once all the Fiddler activity slowed down. The element gets written to the document (and I can see the 100x100 Java error window), but not a single request shows up in Fiddler. Any suggestions, before I go crawl into the corner and cry myself to sleep?

    EDIT: From the Java console, if I hit "l" (el) to "dump classloader list", I see something that looks like this:

        Live entry: key=http://localhost:55446/BaseWebSite/,
            http://localhost:55446/BaseWebSite/applet/myappliet.jar,
            http://localhost:55446/BaseWebSite/applet/applet_dependency.jar,
            refCount=1, threadGroup=sun.plugin2.applet.Applet2ThreadGroup[name=http://localhost:55446/BaseWebSite/-threadGroup,maxpri=4]
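    (One thing worth trying, sketched on the assumption that the class and both jars really do live under applet/: put the directory in a codebase attribute, make the archive entries relative to it, and give code the bare class name rather than the file name:)

        <applet codebase="applet"
                archive="myapplet.jar, applet_dependency.jar"
                code="AppletClient"
                width="100" height="100">
            <param name="username" value="blahblah">
        </applet>

    With codebase set, the plugin resolves both the class and the archives against that directory, which also changes which URLs it requests - something Fiddler should then confirm or refute.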

    Read the article

  • Pulling record from mySQL database only working for userid and not email

    - by user2908467
    This function works because I search by userid:

        private void showList_Click(object sender, EventArgs e)
        {
            int id = 0;
            for (int i = 0; i <= sqlClient.Count("UserList"); i++)
            {
                Dictionary<string, string> dik = sqlClient.Select("UserList", "userid = " + id);
                var lines = dik.Select(kv => kv.Key + ": " + kv.Value.ToString());
                userList.AppendText(string.Join(Environment.NewLine, lines));
                userList.AppendText(Environment.NewLine);
                userList.AppendText("--------------------------------------");
                id++;
            }
        }

    This function does not work because I search by email:

        private void login_Click(object sender, EventArgs e)
        {
            string email = lemail.Text;
            Dictionary<string, string> dik = sqlClient.Select("UserList", "firstname = " + email);
            var lines = dik.Select(kv => kv.Key + ": " + kv.Value.ToString());
            logged.AppendText(string.Join(Environment.NewLine, lines));
        }

    This is the error message I receive when I click on the login button:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@aol.com' at line 1

    The email I searched for in the database was "[email protected]", without quotes. I'm led to believe by the error message that the @ sign is causing the conflict, as I know it is a special character, but I am having a hard time figuring out what phrase to search for to help me. Also, here is the function that is being called:

        public Dictionary<string, string> Select(string table, string WHERE)
        {
            // This method selects from the database; it retrieves data from it.
            // You must make a dictionary to use this, since it saves both the column
            // and the value, e.g. "age" and "33", so you can easily search for values.
            // Example: SELECT * FROM names WHERE name='John Smith'
            // This example would retrieve all data about the entry with the name "John Smith".
            // Code: Dictionary<string, string> myDictionary = Select("names", "name='John Smith'");
            // This code creates a dictionary and fills it with info from the database.
            string query = "SELECT * FROM " + table + " WHERE " + WHERE + "";
            Dictionary<string, string> selectResult = new Dictionary<string, string>();
            if (this.Open())
            {
                MySqlCommand cmd = new MySqlCommand(query, conn);
                MySqlDataReader dataReader = cmd.ExecuteReader();
                try
                {
                    while (dataReader.Read())
                    {
                        for (int i = 0; i < dataReader.FieldCount; i++)
                        {
                            selectResult.Add(dataReader.GetName(i).ToString(), dataReader.GetValue(i).ToString());
                        }
                    }
                    dataReader.Close();
                }
                catch { }
                this.Close();
                return selectResult;
            }
            else
            {
                return selectResult;
            }
        }

    My database table is called "UserList". The fields, in order, are: userid, email, password, lastname, firstname. Any help would be greatly appreciated. This site is amazing!
    Read the article

  • How to copy this portion of a text file out and put into a hash using rails? (VATsim datafile)

    - by Rusty Broderick
    Hi, I'm trying to work out how I can cut out the section between !CLIENTS and the ';' line that follows it, and then parse it into a hash in order to make an XML file. Honestly, I have no idea how to do it. The file (vatsim-data.txt; original file here) is as follows:

        ; Created at 30/12/2010 01:29:14 UTC by Data Server V4.0
        ;
        ; Data is the property of VATSIM.net and is not to be used for commercial purposes without the express written permission of the VATSIM.net Founders or their designated agent(s).
        ;
        ; Sections are:
        ; !GENERAL contains general settings
        ; !CLIENTS contains informations about all connected clients
        ; !PREFILE contains informations about all prefiled flight plans
        ; !SERVERS contains a list of all FSD running servers to which clients can connect
        ; !VOICE SERVERS contains a list of all running voice servers that clients can use
        ;
        ; Data formats of various sections are:
        ; !GENERAL section - VERSION is this data format version
        ;                    RELOAD is time in minutes this file will be updated
        ;                    UPDATE is the last date and time this file has been updated. Format is yyyymmddhhnnss
        ;                    ATIS ALLOW MIN is time in minutes to wait before allowing manual Atis refresh by way of web page interface
        ;                    CONNECTED CLIENTS is the number of clients currently connected
        ; !CLIENTS section - callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb:
        ; !PREFILE section - callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb:
        ; !SERVERS section - ident:hostname_or_IP:location:name:clients_connection_allowed:
        ; !VOICE SERVERS section - hostname_or_IP:location:name:clients_connection_allowed:type_of_voice_server:
        ;
        ; Field separator is : character
        ;
        ; !GENERAL:
        VERSION = 8
        RELOAD = 2
        UPDATE = 20101230012914
        ATIS ALLOW MIN = 5
        CONNECTED CLIENTS = 515
        ;
        ; !VOICE SERVERS:
        voice2.vacc-sag.org:Nurnberg:Europe-CW:1:R:
        voice.vatsim.fi:Finland - Sponsored by Verkkokauppa.com and NBL Solutions:Finland:1:R:
        rw.liveatc.net:USA, California:Liveatc:1:R:
        rw1.vatpac.org:Melbourne, Australia:Oceania:1:R:
        spain.vatsim.net:Spain:Vatsim Spain Server:1:R:
        voice.nyartcc.org:Sponsored by NY ARTCC:NY-ARTCC:1:R:
        voice.zhuartcc.net:Sponsored by Houston ARTCC:ZHU-ARTCC:1:R:
        ;
        ; !CLIENTS:
        01PD:1090811:prentis gibbs KJFK:PILOT::40.64841:-73.81030:15:0::0::::USA-E:100:1:1200::::::::::::::::::::20101230010851:28:30.1:1019:
        4X-BRH:1074589:george sandoval LLJR:PILOT::50.05618:-125.84429:10819:206:C337/G:150:CYAL:FL120:CCI9:EUROPE-C2:100:1:6043:::2:I:110:110:1:26:2:59::/T/:DCT:0:0:0:0:::20101230005323:129:29.76:1007:
        50125:1109107:Dave Frew KEDU:PILOT::46.52736:-121.95317:23877:471:B/B744/F:530:KTCM:30000:KLSV:USA-E:100:1:7723:::1:I:0:116:0:0:0:0:::GPS DIRECT.:0:0:0:0:::20101230012346:164:29.769:1008:
        85013:1126003:Dmitry Abramov UWWW:PILOT::76.53819:71.54782:33444:423:T/ZZZZ/G:500:UUDD:FL330:ULAA:EUROPE-C2:100:1:2200:::2:I:0:2139:0:0:0:0:ULLI::BITSA DCT WM/N0485S1010 DCT KS DCT NE R22 ULWW B153 LAPEK B210 SU G476 OLATA:0:0:0:0:::20101229215815:62:53.264:1803:
        ;
        ; !SERVERS:
        EUROPE-C2:88.198.19.202:Europe:Center Europe Server Two:1:
        ;
        ; END

    I want to format the XML with client as the parent tag and the nested tags as follows:

        callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb:

    Any help in solving this would be much appreciated!
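    (A minimal Ruby sketch of the cut-and-split - the field list is abridged to the first few columns and should be extended with the rest of the names from the !CLIENTS format line above; the filename is an assumption:)

        CLIENT_FIELDS = %w[callsign cid realname clienttype frequency
                           latitude longitude altitude groundspeed] # extend per the header

        def parse_clients(text)
          # everything between the "!CLIENTS:" line and the next ";" comment line
          section = text[/^!CLIENTS:\s*\n(.*?)^;/m, 1] or return []
          section.each_line.reject { |l| l.strip.empty? }.map do |line|
            Hash[CLIENT_FIELDS.zip(line.chomp.split(':'))]
          end
        end

        clients = parse_clients(File.read('vatsim-data.txt'))
        # => [{"callsign"=>"01PD", "cid"=>"1090811", ...}, ...]

    From there, an array of hashes serializes with ActiveSupport's to_xml, e.g. clients.to_xml(root: 'clients', children: 'client'), give or take the exact options.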

    Read the article

  • How do I get jqGrid to work using ASP.NET + JSON on the backend?

    - by briandus
    Hi friends, OK, I'm back. I totally simplified my problem to just three simple fields and I'm still stuck on the same line using the addJSONData method. I've been stuck on this for days, and no matter how I rework the ajax call, the JSON string, blah blah blah... I can NOT get this to work! I can't even get it to work as a function when adding one row of data manually. Can anyone PLEASE post a working sample of jqGrid that works with ASP.NET and JSON? Please include 2-3 fields (string, integer and date preferably). I would be happy to see a working sample of jqGrid and just the manual addition of a JSON object using the addJSONData method. Thanks SO MUCH!! If I ever get this working, I will post a full code sample for all the other ASP.NET/JSON users stuck on this as well. Again, THANKS!!

        tbl.addJSONData(objGridData); // err: tbl.addJSONData is not a function!!

    Here is what Firebug is showing when I receive this message:

        objGridData Object total=1 page=1 records=5 rows=[5]
            page    "1"
            records "5"
            total   "1"
            rows    [Object ID=1 PartnerID=BCN, Object ID=2 PartnerID=BCN, Object ID=3 PartnerID=BCN, 2 more...]
                    0=Object 1=Object 2=Object 3=Object 4=Object
            (index) 0
            (prop) ID               (value) 1
            (prop) PartnerID        (value) "BCN"
            (prop) DateTimeInserted (value) Thu May 29 2008 12:08:45 GMT-0700 (Pacific Daylight Time)
            * There are three more rows

    Here is the value of the variable tbl:

        (value) 'Table.scroll'
        <TABLE cellspacing="0" cellpadding="0" border="0" style="width: 245px;" class="scroll grid_htable">
            <THEAD>
                <TR>
                    <TH class="grid_sort grid_resize" style="width: 55px;"><SPAN> </SPAN><DIV id="jqgh_ID" style="cursor: pointer;">ID <IMG src="http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images/sort_desc.gif"/></DIV></TH>
                    <TH class="grid_resize" style="width: 90px;"><SPAN> </SPAN><DIV id="jqgh_PartnerID" style="cursor: pointer;">PartnerID </DIV></TH>
                    <TH class="grid_resize" style="width: 100px;"><SPAN> </SPAN><DIV id="jqgh_DateTimeInserted" style="cursor: pointer;">DateTimeInserted </DIV></TH>
                </TR>
            </THEAD>
        </TABLE>

    Here is the complete function:

        $('table.scroll').jqGrid({
            mtype: "POST",
            datatype: function(postdata) {
                $.ajax({
                    url: 'EDI.asmx/GetTestJSONString',
                    type: "POST",
                    contentType: "application/json; charset=utf-8",
                    data: "{}",
                    dataType: "text", // not json; let me try to parse it myself
                    success: function(msg, st) {
                        if (st == "success") {
                            var gridData;
                            // strip off the "d:" notation
                            var result = JSON.parse(msg);
                            for (var property in result) {
                                gridData = result[property];
                                break;
                            }
                            var objGridData = eval("(" + gridData + ")"); // creates an object with visible data and structure
                            var tbl = jQuery('table.scroll')[0];
                            alert(objGridData.rows[0].PartnerID); // displays the correct data

                            //tbl.addJSONData(objGridData); // error received: addJSONData not a function

                            // error received: addJSONData not a function (this uses eval as shown in the documentation)
                            //tbl.addJSONData(eval("(" + objGridData + ")"));

                            // the line below evaluates fine, creating an object with visible data and structure
                            //var objGridData = eval("(" + gridData + ")");
                            // BUT, the same thing will not work here
                            //tbl.addJSONData(eval("(" + gridData + ")"));

                            // FIREBUG SHOWS THIS AS THE VALUE OF gridData:
                            // "{"total":"1","page":"1","records":"5","rows":[{"ID":1,"PartnerID":"BCN","DateTimeInserted":new Date(1214412777787)},{"ID":2,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125000)},{"ID":3,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125547)},{"ID":4,"PartnerID":"EHG","DateTimeInserted":new Date(1235603192033)},{"ID":5,"PartnerID":"EMDEON","DateTimeInserted":new Date(1235603192000)}]}"
                        }
                    }
                });
            },
            jsonReader: {
                root: "rows",        // array containing actual data
                page: "page",        // current page
                total: "total",      // total pages for the query
                records: "records",  // total number of records
                repeatitems: false,
                id: "ID"             // index of the column with the PK in it
            },
            colNames: ['ID', 'PartnerID', 'DateTimeInserted'],
            colModel: [
                { name: 'ID', index: 'ID', width: 55 },
                { name: 'PartnerID', index: 'PartnerID', width: 90 },
                { name: 'DateTimeInserted', index: 'DateTimeInserted', width: 100 }
            ],
            rowNum: 10,
            rowList: [10, 20, 30],
            imgpath: 'http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images',
            pager: jQuery('#pager'),
            sortname: 'ID',
            viewrecords: true,
            sortorder: "desc",
            caption: "TEST Example"
        });
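    (One concrete lead from the Firebug dump above: the element captured in tbl carries class "scroll grid_htable" and contains only a THEAD - that is the header table jqGrid generates, not the grid itself, so it naturally has no addJSONData. A sketch worth trying is to select the grid by its own id; "#myGrid" is hypothetical, standing in for whatever id the original <table> element has:)

        // inside the success handler
        var tbl = jQuery("#myGrid")[0]; // the grid's own <table>, not the first .scroll match
        if (tbl && typeof tbl.addJSONData === "function") {
            tbl.addJSONData(objGridData);
        } else {
            alert("wrong element grabbed: " + (tbl && tbl.className));
        }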

    Read the article

  • Hibernate won't save into database

    - by Blitzkr1eg
    I mapped some classes to some tables with Hibernate in Java, and I set Hibernate to show SQL. It opens the session, it shows that it executes the SQL, it closes the session - but there are no modifications to the database.

    Entities:

        public class Profesor implements Comparable<Profesor> {
            private int id;
            private String nume;
            private String prenume;
            private int departament_id;
            private Set<Disciplina> listaDiscipline; // the teacher gives some courses
        }

        public class Disciplina implements Comparable<Disciplina> { // the course class
            private int id;
            private String denumire;
            private String syllabus;
            private String schNotare;
            private Set<Lectie> lectii;
            private Set<Tema> listaTeme;
            private Set<Grup> listaGrupuri; // the course gets taught/assigned to some groups of students
            private Set<Assignment> listAssignments;
        }

    Mapping:

        <hibernate-mapping default-cascade="all">
            <class name="model.Profesor" table="devgar_scoala.profesori">
                <id name="id" column="id">
                    <generator class="increment"/>
                </id>
                <set name="listaDiscipline" table="devgar_scoala.`profesori-discipline`">
                    <key column="Profesori_id" />
                    <many-to-many class="model.Disciplina" column="Discipline_id" />
                </set>
                <property name="nume" column="Nume" type="string" />
                <property name="prenume" column="Prenume" type="string" />
                <property name="departament_id" column="Departamente_id" type="integer" />
            </class>

            <class name="model.Grup" table="devgar_scoala.grupe">
                <id name="id" unsaved-value="0">
                    <generator class="increment"/>
                </id>
                <set name="listaStudenti" table="devgar_scoala.`studenti-grupe`">
                    <key column="Grupe_id" />
                    <many-to-many column="Studenti_nrMatricol" class="model.Student" />
                </set>
                <property name="nume" column="Grupa" type="string"/>
                <property name="programStudiu" column="progStudii_id" type="integer" />
            </class>

            <class name="model.Disciplina" table="devgar_scoala.discipline">
                <id name="id">
                    <generator class="increment"/>
                </id>
                <property name="denumire" column="Denumire" type="string"/>
                <property name="syllabus" type="string" column="Syllabus"/>
                <property name="schNotare" type="string" column="SchemaNotare"/>
                <set name="listaGrupuri" table="devgar_scoala.`Discipline-Grupe`">
                    <key column="Discipline_id" />
                    <many-to-many column="Grupe_id" class="model.Grup" />
                </set>
                <set name="lectii" table="devgar_scoala.lectii">
                    <key column="Discipline_id" not-null="true"/>
                    <one-to-many class="model.Lectie" />
                </set>
            </class>
        </hibernate-mapping>

    The only 'funny' thing is that the Profesor object gets loaded not with/by Hibernate but with manual classic SQL in Java. That's why I save the Profesor object like this (p is the manually loaded Profesor object):

        Profesor p2 = (Profesor) session.merge(p);
        session.saveOrUpdate(p2);
        // flush session of course

    After this I get in the Java console:

        Hibernate: insert into devgar_scoala.grupe (Grupa, progStudii_id, id) values (?, ?, ?)

    but when I look into the database there are no new rows in the table Grupe (the Groups table).
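    (SQL echoed to the console with no rows appearing afterwards is the classic signature of work done outside a committed transaction: without a commit, the INSERTs are discarded when the session closes. If the save isn't already wrapped, a sketch, with sessionFactory assumed from your configuration:)

        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            Profesor p2 = (Profesor) session.merge(p);
            tx.commit(); // without this, the logged INSERTs never become permanent
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }

    (Note that merge alone already schedules the insert/update for p2, so the extra saveOrUpdate(p2) call is redundant.)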

    Read the article
