Search Results

Search found 69106 results on 2765 pages for 'missing data'.


  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by milomir.vojvodic
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK). For Oracle Partners across Europe, Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialized partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.
    Keynote: Andrew Sutherland, SVP Oracle Technology. Hot agenda items will include: The Fusion Middleware Stack: Engineered to work together; A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined; In-Memory Analytics for Extreme Insight; Latest Product Development Roadmap for Data Integration and Analytics.
    Venue: Oracle's London City Moorgate offices. Places are limited, register from this link. Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member Partners, but you will be responsible for your own travel and hotel expenses.
    Event Schedule: During this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion. Nov. 12th: Day 1 Main Plenary Session, full day, starting 10.30 am; Oracle hosted dinner in the evening. Nov. 13th onwards: Architecture Masterclass: IM Reference Architecture – Big Data and Little Data combined (1 day); BI-Apps Bootcamp (4 days); Oracle GoldenGate workshop (1 day); Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day).
    For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected] and Milomir Vojvodic at [email protected]

    Read the article

  • Ldap ssh authentication is super slow... any way to speed it up?

    - by Johnathon
    I am running OpenSUSE. Here is the output of ssh -vvv: OpenSSH_5.8p1, OpenSSL 1.0.0c 2 Dec 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to <ipaddress> [ipaddress] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug3: Incorrect RSA1 identifier debug3: Could not load "/root/.ssh/id_rsa" as a RSA1 public key debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /root/.ssh/id_rsa type 1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1 debug1: match: OpenSSH_5.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "ipaddress" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /root/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: 
kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 138/256 debug2: bits set: 529/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA cb:7f:ff:2e:65:28:f0:95:e6:8a:71:24:2a:67:02:2b debug3: load_hostkeys: loading entries for host "<ipaddress>" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /root/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug1: Host '<ipaddress>' is known and matches the RSA host key. 
debug1: Found key in /root/.ssh/known_hosts:4 debug2: bits set: 504/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /root/.ssh/id_rsa (0xb789d5c8) debug2: key: /root/.ssh/id_dsa ((nil)) debug2: key: /root/.ssh/id_ecdsa ((nil)) debug1: Authentications that can continue: publickey,keyboard-interactive debug3: start over, passed a different list publickey,keyboard-interactive debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /root/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply It hangs here for a good 30 seconds to a minute, then: debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa debug1: Trying private key: /root/.ssh/id_ecdsa debug3: no such identity: /root/.ssh/id_ecdsa debug2: we did not send a packet, disable method debug3: authmethod_lookup keyboard-interactive debug3: remaining preferred: password debug3: authmethod_is_enabled keyboard-interactive debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 1 I added PubkeyAuthentication no to /etc/ssh/ssh_config and /etc/ssh/sshd_config, which makes it faster getting to the password prompt, but the password prompt still takes some time. Any way to fix that? Here is where the password hangs: debug3: packet_send2: adding 32 (len 25 padlen 7 extra_pad 64) debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 0 debug3: packet_send2: adding 48 (len 10 padlen 6 extra_pad 64) debug1: Authentication succeeded (keyboard-interactive). Authenticated to ipaddress ([ipaddress]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. FIXED!!!!!!!!!!!!!! What I did: in /etc/nsswitch.conf I had ldap included in the group and passwd entries, which slows it down a lot. Thank you everybody for your input. passwd: compat group: files hosts: files dns networks: files dns

    Read the article

  • What are the data transfer speeds within the disk and to other devices?

    - by Kumar
    I use Debian 6 on an HP EliteBook 6930 with 2 GB of RAM. I was copying two AVI files, 1.5 GB in total, and noticed that the data copying was done at a rate of 4 MB/sec. When I copy the same AVIs to my Western Digital Passport 25G USB plug-in drive, the data transfer speed is 11+ MB/sec, and this speed differs depending on which USB port I plug the drive into. Interestingly, at work I also downloaded a 16 MB IE8 exe on my virtual XP machine, running inside Oracle (Sun) VirtualBox, and it was downloaded AND saved within a second. We have a high speed network at work. :-) Why and how is this difference in data speeds possible?

    Read the article

  • What's the best way of handling permissions for apache2's user www-data in /var/www ?

    - by gyaresu
    Has anyone got a nice solution for handling files in /var/www/? We're running Name Based Virtual Hosts and the apache2 user is 'www-data'. We've got two regular users & root. So when messing with files in /var/www, rather than having to run chown -R www-data:www-data all the time, what's a good way of handling this? Supplementary question: how hardcore do you then go on permissions? This one has always been a problem in collaborative development environments. Cheers.

    Read the article

  • One hard drive missing after converting from IDE to AHCI

    - by user700859
    I have two 500GB SATA HDDs in my desktop running XP. After converting from IDE to AHCI, the HDD I use to store data can no longer be accessed from XP. It shows only the first partition as 150GB, and the file system type is listed as RAW. However, if I boot into a live Linux environment or if I pull that HDD out and connect it using USB, I can still get to all my files. So I completely formatted the HDD into one single partition, but XP still does not recognize it: it still shows the disk as the 150GB partition and the file system type is still RAW. What's going on and how do I fix this? Thanks in advance.

    Read the article

  • LVM vs RAID0 vs RAID "linear" - Combine 2 disks as one, data recovery?

    - by leto
    Hi there, given two 2TB USB external disks that have to be combined into one 4TB volume and formatted with one big filesystem (XFS), I have a small question to ask. Should one disk be unplugged or damaged, does LVM provide better data recovery, in the sense that the data on the still-working disk can be recovered, or is everything lost? I would appreciate a solution where only the data of one disk is lost and I can recover the content of the other with the usual filesystem/LVM/RAID tools. Is that possible with LVM or RAID "linear"? This is for storing unimportant files that can be retrieved from backup, but I want to save time :) Thank you in advance

    Read the article

  • [ASP.NET 4.0] Persisting Row Selection in Data Controls

    - by HosamKamel
    Data Control Selection Feature in ASP.NET 2.0: The row selection feature of the ASP.NET data controls was based on the row index (in the current page). This of course produces an issue: if you select an item on the first page and then navigate to the second page without selecting any record, you will find the row with the same index selected on the second page! In the sample application attached: select the second row in the books GridView, navigate to the second page without doing any selection, and you will find the second row on the second page selected.
    Persisting Row Selection: This is a new feature which replaces the old selection mechanism based on row index with selection based on the row data key instead. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected.
    Data Control Selection Feature in ASP.NET 3.5 SP1: Persisting row selection was initially supported only in Dynamic Data projects.
    Data Control Selection Feature in ASP.NET 4.0: Persisted selection is now supported for the GridView and ListView controls in all projects. You can enable this feature by setting the EnablePersistedSelection property, as shown below. An important thing to note: once you enable this feature, you also have to set the DataKeyNames property because, as discussed, the whole approach is based on the row data key. It is a simple feature, but a much more natural behavior than the behavior in earlier versions of ASP.NET. Download Demo Project
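
    The markup referenced by "as shown below" is not included in this excerpt. As a minimal sketch only (the GridView name BooksGridView and the key column BookId are assumptions, not taken from the original post), enabling the feature from code-behind looks roughly like this:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class BooksPage : Page
    {
        // BooksGridView is assumed to be declared in the .aspx markup (hypothetical name).
        protected GridView BooksGridView;

        protected void Page_Load(object sender, EventArgs e)
        {
            // ASP.NET 4.0: selection follows the row's data key instead of the row index.
            BooksGridView.EnablePersistedSelection = true;

            // Required for persisted selection: the key column the selection is tied to.
            BooksGridView.DataKeyNames = new[] { "BookId" }; // "BookId" is a hypothetical column
        }
    }

    The same two properties can also be set declaratively in the GridView markup; the point is simply that EnablePersistedSelection only makes sense together with DataKeyNames.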

    Read the article

  • ssh permission denied

    - by Gitmo
    I am trying to ssh into a remote machine and I get the following debug messages: debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to xxx.xxx.x.xx [xxx.xxx.xx.x] port 22. debug1: Connection established. debug3: Not a RSA1 key file /home/hadoop/.ssh/id_rsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/hadoop/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2 debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email 
protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-cbc hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 128/256 debug2: bits set: 511/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /home/hadoop/.ssh/known_hosts debug3: check_host_in_hostfile: match line 20 debug1: Host '192.168.1.63' is known and matches the RSA host key. debug1: Found key in /home/hadoop/.ssh/known_hosts:20 debug2: bits set: 511/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/hadoop/.ssh/id_rsa (0x241c110) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /home/hadoop/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey,password). What seems to be the problem?? I have tried everything, this is driving me nuts.

    Read the article

  • Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications

    - by kimberly.billings
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle recently introduced the Oracle Communications Data Model (OCDM). With the OCDM, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the OCDM, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Hong Kong Broadband Network, the fastest growing and second largest broadband service provider in Hong Kong, enhanced its data warehouse using Oracle Communications Data Model. It went live with OCDM within three months, and has increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Read more about HKBN's use of OCDM. Read more about OCDM

    Read the article

  • Database Developers Can Now Save 20%

    - by stephen.garth
    Database developers can now increase productivity and save money at the same time. For a limited time, Oracle Store is offering a 20% discount on Oracle SQL Developer Data Modeler. Just enter the code SQLDDM at checkout to get the discount. Oracle SQL Developer Data Modeler is an independent, standalone product with a full spectrum of data and database modeling tools and utilities, including modeling for Entity Relationship Diagrams (ERD), Relational (database design), Data Type and Multi-dimensional modeling, full forward and reverse engineering and DDL code generation. SQL Developer Data Modeler can connect to any supported Oracle Database and is platform independent. Save 20% on Oracle SQL Developer Data Modeler at Oracle Store - Discount Code SQLDDM Find out more about Oracle SQL Developer and Oracle SQL Developer Data Modeler

    Read the article

  • Ubuntu 11.10 - Every time I try to connect to my box using SSH, it fails to connect

    - by YumYumYum
    From any other PC doing SSH to my Ubuntu 11.10,is failing. Even the SSH is running: Other PC: retrying over and over $ ping 192.168.0.128 PING 192.168.0.128 (192.168.0.128) 56(84) bytes of data. From 192.168.0.226 icmp_seq=1 Destination Host Unreachable From 192.168.0.226 icmp_seq=2 Destination Host Unreachable From 192.168.0.226 icmp_seq=3 Destination Host Unreachable From 192.168.0.226 icmp_seq=4 Destination Host Unreachable $ sudo service iptables stop Stopping iptables (via systemctl): [ OK ] $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] Connection closed by 192.168.0.128 $ ssh [email protected] [email protected]'s password: Connection closed by UNKNOWN $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host Follow up: -- checked cable -- using cable tester and other detectors -- no problem found in cable -- used random 10 cables -- adapter is not broken -- checked it using circuit tester by opening the system (card is new so its not network adapter card problem) -- leds are OK showing -- used LiveCD and did same ping test was having same problem -- disabled ipv6 100% to make sure its not the cause -- disabled iptables 100% so its also not the issue -- some more info $ sudo killall dnsmasq -- did not solved the problem -- -- like many other Q/A was suggesting this same --- $ iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination $ netstat -nr Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 $ ssh -vvv [email protected] OpenSSH_5.6p1, OpenSSL 1.0.0j-fips 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.0.128 [192.168.0.128] port 22. debug1: Connection established. debug3: Not a RSA1 key file /home/sun/.ssh/id_rsa. 
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/sun/.ssh/id_rsa type 1 debug1: identity file /home/sun/.ssh/id_rsa-cert type -1 debug1: identity file /home/sun/.ssh/id_dsa type -1 debug1: identity file /home/sun/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1 debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 118/256 debug2: bits set: 539/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: match line 139 debug1: Host '192.168.0.128' is known and matches the RSA host key. debug1: Found key in /home/sun/.ssh/known_hosts:139 debug2: bits set: 544/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/sun/.ssh/id_rsa (0x213db960) debug2: key: /home/sun/.ssh/id_dsa ((nil)) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/sun/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/sun/.ssh/id_dsa debug3: no such identity: /home/sun/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 60 padlen 4 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). Authenticated to 192.168.0.128 ([192.168.0.128]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. 
debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env XDG_SESSION_ID debug3: Ignored env HOSTNAME debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE_PID debug3: Ignored env IMSETTINGS_INTEGRATE_DESKTOP debug3: Ignored env GPG_AGENT_INFO debug3: Ignored env TERM debug3: Ignored env HARDWARE_PLATFORM debug3: Ignored env SHELL debug3: Ignored env DESKTOP_STARTUP_ID debug3: Ignored env HISTSIZE debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env GJS_DEBUG_OUTPUT debug3: Ignored env WINDOWID debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env QTDIR debug3: Ignored env QTINC debug3: Ignored env GJS_DEBUG_TOPICS debug3: Ignored env IMSETTINGS_MODULE debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env USERNAME debug3: Ignored env SESSION_MANAGER debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE debug3: Ignored env PATH debug3: Ignored env MAIL debug3: Ignored env DESKTOP_SESSION debug3: Ignored env QT_IM_MODULE debug3: Ignored env PWD debug1: Sending env XMODIFIERS = @im=none debug2: channel 0: request env confirm 0 debug1: Sending env LANG = en_US.utf8 debug2: channel 0: request env confirm 0 debug3: Ignored env KDE_IS_PRELINKED debug3: Ignored env GDM_LANG debug3: Ignored env KDEDIRS debug3: Ignored env GDMSESSION debug3: Ignored env SSH_ASKPASS debug3: Ignored env HISTCONTROL debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env GDL_PATH debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env QTLIB debug3: Ignored env CVS_RSH debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env WINDOWPATH debug3: Ignored env XDG_RUNTIME_DIR debug3: Ignored env DISPLAY debug3: Ignored env G_BROKEN_FILENAMES debug3: Ignored env COLORTERM debug3: Ignored env XAUTHORITY debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-generic x86_64) * Documentation: https://help.ubuntu.com/ 297 packages can be updated. 92 updates are security updates. New release '12.04 LTS' available. Run 'do-release-upgrade' to upgrade to it. Last login: Fri Jun 8 07:45:15 2012 from 192.168.0.226 sun@SystemAX51:~$ ping 19<--------Lost connection again-------------- Tail follow: -- dmesg is showing a very abnormal logs, like Ubuntu is automatically bringing the eth0 up, where eth0 is getting also auto down. 
[ 2025.897511] r8169 0000:02:00.0: eth0: link up [ 2029.347649] r8169 0000:02:00.0: eth0: link up [ 2030.775556] r8169 0000:02:00.0: eth0: link up [ 2038.242203] r8169 0000:02:00.0: eth0: link up [ 2057.267801] r8169 0000:02:00.0: eth0: link up [ 2062.871770] r8169 0000:02:00.0: eth0: link up [ 2082.479712] r8169 0000:02:00.0: eth0: link up [ 2285.630797] r8169 0000:02:00.0: eth0: link up [ 2308.417640] r8169 0000:02:00.0: eth0: link up [ 2480.948290] r8169 0000:02:00.0: eth0: link up [ 2824.884798] r8169 0000:02:00.0: eth0: link up [ 3030.022183] r8169 0000:02:00.0: eth0: link up [ 3306.587353] r8169 0000:02:00.0: eth0: link up [ 3523.566881] r8169 0000:02:00.0: eth0: link up [ 3619.839585] r8169 0000:02:00.0: eth0: link up [ 3682.154393] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 3899.866854] r8169 0000:02:00.0: eth0: link up [ 4723.978269] r8169 0000:02:00.0: eth0: link up [ 4807.415682] r8169 0000:02:00.0: eth0: link up [ 5101.865686] r8169 0000:02:00.0: eth0: link up How do i fix it? -- http://ubuntuforums.org/showthread.php?t=1959794 -- apt-get install openipml openhpi-plugin-ipml

    Read the article

  • Every time I try to connect to my box using SSH, it fails to connect

    - by YumYumYum
    From any other PC doing SSH to my Ubuntu 11.10,is failing. My network setup: Telenet ISP (Belgium) Fiber cable < RJ45 cable straight to Ubuntu PC Even the SSH is running: Other PC: retrying over and over $ ping 192.168.0.128 PING 192.168.0.128 (192.168.0.128) 56(84) bytes of data. From 192.168.0.226 icmp_seq=1 Destination Host Unreachable From 192.168.0.226 icmp_seq=2 Destination Host Unreachable From 192.168.0.226 icmp_seq=3 Destination Host Unreachable From 192.168.0.226 icmp_seq=4 Destination Host Unreachable $ sudo service iptables stop Stopping iptables (via systemctl): [ OK ] $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] Connection closed by 192.168.0.128 $ ssh [email protected] [email protected]'s password: Connection closed by UNKNOWN $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host Follow up: -- checked cable -- using cable tester and other detectors -- no problem found in cable -- used random 10 cables -- adapter is not broken -- checked it using circuit tester by opening the system (card is new so its not network adapter card problem) -- leds are OK showing -- used LiveCD and did same ping test was having same problem -- disabled ipv6 100% to make sure its not the cause -- disabled iptables 100% so its also not the issue -- some more info $ nmap 192.168.0.128 Starting Nmap 5.50 ( http://nmap.org ) at 2012-06-08 19:11 CEST Nmap scan report for 192.168.0.128 Host is up (0.00045s latency). All 1000 scanned ports on 192.168.0.128 are closed (842) or filtered (158) Nmap done: 1 IP address (1 host up) scanned in 6.86 seconds ubuntu@ubuntu:~$ netstat -aunt | head Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN tcp 0 1 192.168.0.128:58616 74.125.132.99:80 FIN_WAIT1 tcp 0 0 192.168.0.128:56749 199.7.57.72:80 ESTABLISHED tcp 0 1 192.168.0.128:58614 74.125.132.99:80 FIN_WAIT1 tcp 0 0 192.168.0.128:49916 173.194.65.113:443 ESTABLISHED tcp 0 1 192.168.0.128:45699 64.34.119.101:80 SYN_SENT tcp 0 0 192.168.0.128:48404 64.34.119.12:80 ESTABLISHED tcp 0 0 192.168.0.128:54161 67.201.31.70:80 TIME_WAIT $ sudo killall dnsmasq -- did not solved the problem -- -- like many other Q/A was suggesting this same --- $ iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination $ netstat -nr Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 $ ssh -vvv [email protected] OpenSSH_5.6p1, OpenSSL 1.0.0j-fips 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.0.128 [192.168.0.128] port 22. debug1: Connection established. debug3: Not a RSA1 key file /home/sun/.ssh/id_rsa. 
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/sun/.ssh/id_rsa type 1 debug1: identity file /home/sun/.ssh/id_rsa-cert type -1 debug1: identity file /home/sun/.ssh/id_dsa type -1 debug1: identity file /home/sun/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1 debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 118/256 debug2: bits set: 539/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: match line 139 debug1: Host '192.168.0.128' is known and matches the RSA host key. debug1: Found key in /home/sun/.ssh/known_hosts:139 debug2: bits set: 544/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/sun/.ssh/id_rsa (0x213db960) debug2: key: /home/sun/.ssh/id_dsa ((nil)) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/sun/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/sun/.ssh/id_dsa debug3: no such identity: /home/sun/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 60 padlen 4 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). Authenticated to 192.168.0.128 ([192.168.0.128]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. 
debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env XDG_SESSION_ID debug3: Ignored env HOSTNAME debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE_PID debug3: Ignored env IMSETTINGS_INTEGRATE_DESKTOP debug3: Ignored env GPG_AGENT_INFO debug3: Ignored env TERM debug3: Ignored env HARDWARE_PLATFORM debug3: Ignored env SHELL debug3: Ignored env DESKTOP_STARTUP_ID debug3: Ignored env HISTSIZE debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env GJS_DEBUG_OUTPUT debug3: Ignored env WINDOWID debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env QTDIR debug3: Ignored env QTINC debug3: Ignored env GJS_DEBUG_TOPICS debug3: Ignored env IMSETTINGS_MODULE debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env USERNAME debug3: Ignored env SESSION_MANAGER debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE debug3: Ignored env PATH debug3: Ignored env MAIL debug3: Ignored env DESKTOP_SESSION debug3: Ignored env QT_IM_MODULE debug3: Ignored env PWD debug1: Sending env XMODIFIERS = @im=none debug2: channel 0: request env confirm 0 debug1: Sending env LANG = en_US.utf8 debug2: channel 0: request env confirm 0 debug3: Ignored env KDE_IS_PRELINKED debug3: Ignored env GDM_LANG debug3: Ignored env KDEDIRS debug3: Ignored env GDMSESSION debug3: Ignored env SSH_ASKPASS debug3: Ignored env HISTCONTROL debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env GDL_PATH debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env QTLIB debug3: Ignored env CVS_RSH debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env WINDOWPATH debug3: Ignored env XDG_RUNTIME_DIR debug3: Ignored env DISPLAY debug3: Ignored env G_BROKEN_FILENAMES debug3: Ignored env COLORTERM debug3: Ignored env XAUTHORITY debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-generic x86_64) * Documentation: https://help.ubuntu.com/ 297 packages can be updated. 92 updates are security updates. New release '12.04 LTS' available. Run 'do-release-upgrade' to upgrade to it. Last login: Fri Jun 8 07:45:15 2012 from 192.168.0.226 sun@SystemAX51:~$ ping 19<--------Lost connection again-------------- Tail follow: -- dmesg is showing a very abnormal logs, like Ubuntu is automatically bringing the eth0 up, where eth0 is getting also auto down. 
[ 2025.897511] r8169 0000:02:00.0: eth0: link up [ 2029.347649] r8169 0000:02:00.0: eth0: link up [ 2030.775556] r8169 0000:02:00.0: eth0: link up [ 2038.242203] r8169 0000:02:00.0: eth0: link up [ 2057.267801] r8169 0000:02:00.0: eth0: link up [ 2062.871770] r8169 0000:02:00.0: eth0: link up [ 2082.479712] r8169 0000:02:00.0: eth0: link up [ 2285.630797] r8169 0000:02:00.0: eth0: link up [ 2308.417640] r8169 0000:02:00.0: eth0: link up [ 2480.948290] r8169 0000:02:00.0: eth0: link up [ 2824.884798] r8169 0000:02:00.0: eth0: link up [ 3030.022183] r8169 0000:02:00.0: eth0: link up [ 3306.587353] r8169 0000:02:00.0: eth0: link up [ 3523.566881] r8169 0000:02:00.0: eth0: link up [ 3619.839585] r8169 0000:02:00.0: eth0: link up [ 3682.154393] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) [ 3899.866854] r8169 0000:02:00.0: eth0: link up [ 4723.978269] r8169 0000:02:00.0: eth0: link up [ 4807.415682] r8169 0000:02:00.0: eth0: link up [ 5101.865686] r8169 0000:02:00.0: eth0: link up How do i fix it? -- http://ubuntuforums.org/showthread.php?t=1959794 $ apt-get install openipml openhpi-plugin-ipml $ openipmish > help redisp_cmd on|off > redisp_cmd on redisp set Final follow up: Step 1: BUG for network card driver r8169 Step 2: get the latest build version http://www.realtek.com/downloads/downloadsView.aspx?Langid=1&PNid=4&PFid=4&Level=5&Conn=4&DownTypeID=3&GetDown=false&Downloads=true#RTL8110SC(L) Step 3: build / make $ cd /var/tmp/driver $ tar xvfj r8169.tar.bz2 $ make clean modules && make install $ rmmod r8169 $ depmod $ cp src/r8169.ko /lib/modules/3.xxxx/kernel/drivers/net/r8169.ko $ modprobe r8169 $ update-initramfs -u $ init 6 Voila!!

    Read the article

  • Cloud Computing and Data Utilization

    - by Ahsan Alam
    Someone recently asked me “is cloud computing going to change the way we perceive data?”. My first instinct was “of course”; but I restrained myself and thought for a moment. Then my answer was “no”. Why do I feel that way? Technology and business have evolved quite a bit in the past few decades; however, the need to effectively view and utilize data hasn't changed. It is not uncommon to see organizations rely on multiple database management systems (DBMS). Applications and systems are often built to utilize information from all these data sources, and effectively present it to users. In addition to multiple DBMSs, corporations are also housing their systems across numerous data centers. In fact, systems and data can reside anywhere around the world with the advent of globalization. Cloud-based systems have simply provided us a different place to maintain our data, nothing more. Hosting costs, security and accessibility are different issues; however, the way we utilize and view this data remains the same.

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names, and to numeric data taking significantly more space than native datatypes would. A small level could easily end up as 1MB+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects, and give them a default value if an old version of the data is loaded. So I want to keep the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format - to remove the massive duplication of tag/attribute names. Maybe also to give attributes native types, so, for example, floating-point data is stored as 4 bytes per float, not as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? - are any ideal for game use - with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, and just compressing my raw .xml data (it should pack well with zip-like compression), and just taking the memory/performance hit on-load?
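
    For illustration only, here is a sketch of the tag-dictionary idea described above (in C#, not the C/C++ library the question asks for, and not any existing 'binary XML' standard): every tag and attribute name is written once into a string table, and numeric attribute values are stored as native 4-byte floats. The Node model and the 16-bit counts are assumptions made just for this sketch.

    using System;
    using System.Collections.Generic;
    using System.IO;

    // Minimal node model for the sketch: a tag, float-valued attributes, children.
    class Node
    {
        public string Tag;
        public List<KeyValuePair<string, float>> Attributes = new List<KeyValuePair<string, float>>();
        public List<Node> Children = new List<Node>();
    }

    static class BinaryLevelWriter
    {
        public static void Write(Node root, Stream output)
        {
            // First pass: intern every tag/attribute name so each is written exactly once.
            var names = new List<string>();
            var index = new Dictionary<string, ushort>();
            Collect(root, names, index);

            using (var w = new BinaryWriter(output))
            {
                w.Write((ushort)names.Count);            // string table: count, then names
                foreach (var name in names) w.Write(name);
                WriteNode(w, root, index);               // then the node tree
            }
        }

        static void Collect(Node n, List<string> names, Dictionary<string, ushort> index)
        {
            Intern(n.Tag, names, index);
            foreach (var a in n.Attributes) Intern(a.Key, names, index);
            foreach (var c in n.Children) Collect(c, names, index);
        }

        static void Intern(string s, List<string> names, Dictionary<string, ushort> index)
        {
            if (!index.ContainsKey(s)) { index[s] = (ushort)names.Count; names.Add(s); }
        }

        static void WriteNode(BinaryWriter w, Node n, Dictionary<string, ushort> index)
        {
            w.Write(index[n.Tag]);                       // tag name as 2-byte table index
            w.Write((ushort)n.Attributes.Count);
            foreach (var a in n.Attributes)
            {
                w.Write(index[a.Key]);                   // attribute name as table index
                w.Write(a.Value);                        // 4-byte float instead of text
            }
            w.Write((ushort)n.Children.Count);
            foreach (var c in n.Children) WriteNode(w, c, index);
        }
    }

    A matching reader just reverses the steps, and zip-like compression on top still helps, since repeated indices and similar float patterns pack well.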

    Read the article

  • Which would be a better way to load data via ajax

    - by Mike
    I am using google maps and returning html/lat/long from my MySQL database Currently A user picks a business category e.g; "Video Production". an ajax call is sent to a CodeIgniter controller the Controller then queries the db, and returns the following data via JSON Lat/Long of the marker HTML for the popup window this is approximately 34 rows in the database across two tables per business the ajax call receives this data and then plots the marker along with the html onto the map The data that is returned from the controller is one big json object... This is done for all businesses that exist in the Video Production category (currently approx 40 businesses). As you can see, pulling this data for multiple categories (100s of businesses) can get very very taxing on the server. My question is Would it be more beneficial to modify the process flow as such: a user picks a business category e.g; "Video Production". an ajax call is sent to a CodeIgniter controller the controller then queries the database for the location base information lat/long level (used to change marker icon color) This would be a single row per business with several columns the ajax call receives this data and then plots the marker on the map when the user clicks a marker an ajax call is sent to a CodeIgniter Controller the controller queries the database for the HTML and additional data based on business_id and if not, what are some better suggestions to this problem? In summary this means rather than including the HTML and additional data along for each business, only submitting minimal location information and then re-query for that information when each business marker is clicked. Potential Downsides longer load times when a user clicks a marker icon more code?? more queries to the database

    Read the article

  • Generating Deep Arrays: Shallow to Deep, Deep to Shallow or Bad idea?

    - by MobyD
    I'm working on an array structure that will be used as the data source for a report template in a web app. The data comes from relatively complex SQL queries that return one or many rows as one-dimensional associative arrays; in the case of many, they are turned into a two-dimensional indexed array. The data is complex and in some cases there is a lot of it. To save trips to the database (which are extremely expensive in this scenario) I'm attempting to take all of the basic arrays (the one- and two-dimensional raw database data) and put them, conditionally, into a single, five-level-deep array. Organizing the data in PHP seems like a better idea than using WHERE statements in the SQL. Array structure:
        Array of years(
            year => array of types(
                types => array of information(
                    total => value,
                    table => array of data(
                        index => db array
                    )
                )
            )
        )
    My first question is: is this a bad idea? Are arrays like this appropriate for this situation? If this would work, how should I go about populating it? My initial thought was shallow to deep, but the more I work on this, the more I realize that it'd be very difficult to abstract out the conditionals that determine where each item goes in the array. So it seems that starting from the most deeply nested data may be the approach I should take. If this is array abuse, what alternatives exist?

    Read the article

  • Common request: export #Tabular model and data to #PowerPivot

    - by Marco Russo (SQLBI)
    I received this request in many courses, messages and also forum discussions: given an Analysis Services Tabular model, it would be nice to be able to extract a corresponding PowerPivot data model. In order of priority, here are the specific features people (including me) would like to see:
    - Create an empty PowerPivot workbook with the same data model as the Tabular model.
    - Change the connections of the tables in the PowerPivot workbook so they extract data from the Tabular data model; every table should have an EVALUATE ‘TableName’ query in DAX.
    - Apply a filter to the data extracted from every table. For example, you might want to extract all the data for a single country, year or customer group; using the same filtering technique used for role-based security would be nice.
    - Expose an API to automate the process of creating a PowerPivot workbook. Use case: prepare one workbook for every employee containing only their data, which they can use offline. This is a common request from salespeople who want a mini-BI tool to use in front of a customer/lead/supplier, regardless of whether a connection is available.
    This feature would increase the adoption of PowerPivot and Tabular (and, therefore, of Business Intelligence licenses instead of Standard), and would probably raise sales of Office 2013 / Office 365 driven by ISVs, who are the companies that request this feature most. If Microsoft did this, it would be acceptable for it to work only on Office 2013; but if a third party does it, it will make sense (for their revenue) to cover both Excel 2010 and Excel 2013. Another important reason for this feature is that the “Offline cube” feature you have in Excel is not available when your PivotTable is connected to a Tabular model; it can only be used when you connect to Analysis Services Multidimensional. If you think this is an important feature, you can vote for this Connect item.

    Read the article

  • Getting the relational table data into XML recursively

    - by Tom
    I have levels of tables (Level1, Level2, Level3, ...); for simplicity, we'll say I have 3 levels. The rows in the higher-level tables are parents of the lower-level table rows, and the relationship does not skip levels. E.g. Row1Level1 is the parent of Row3Level2, and Row2Level2 is the parent of Row4Level3; a row in Level(n) always has its parent in Level(n-1). Given these tables with data, I need to come up with a recursive function that generates an XML file to represent the relationship and the data. E.g.
        <data>
          <level levelid="1" rowid="1">
            <level levelid="2" rowid="3" />
          </level>
          <level levelid="2" rowid="2">
            <level levelid="3" rowid="4" />
          </level>
        </data>
    I would like help coming up with pseudo-code for this setup. This is roughly what I have so far (Table, Row, GetSubordinateTable and the member names are placeholders):
        XElement GetXmlData(Table table, string parentIdentifier)
        {
            var result = new XElement("data");
            if (table == null) return result;

            // Take only the rows whose parent is the given row.
            foreach (Row row in table.Rows.Where(r => r.ParentIdentifier == parentIdentifier))
            {
                var levelNode = new XElement("level",
                    new XAttribute("levelid", table.Level),
                    new XAttribute("rowid", row.Identifier));

                // Recurse into the subordinate table and attach its <level> children.
                XElement children = GetXmlData(GetSubordinateTable(table), row.Identifier);
                levelNode.Add(children.Elements());

                result.Add(levelNode);
            }
            return result;
        }
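    A hedged usage sketch for the function above; GetTopLevelTable and passing null as the top-level parent id are assumptions about code not shown in the question:

        Table level1 = GetTopLevelTable();        // hypothetical: fetch the Level1 table
        XElement xml = GetXmlData(level1, null);  // Level1 rows have no parent row
        xml.Save("levels.xml");                   // writes the <data>...</data> document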

    Read the article

  • Field symbol and Data reference in SAP-ABAP

    - by PDP-21
    If we compare field symbols and data references with pointers in C, I concluded the following. In C, say we declare a variable "var" of type integer with the value 5. The variable "var" is stored somewhere in memory; say the address holding it is 1000. Now we define a pointer "ptr" and assign it to our variable, so "ptr" holds 1000 and "*ptr" is 5. Let's compare the same situation in SAP ABAP. Here we declare a field symbol "FS" and assign it to the variable "var". Now my question is: what does "FS" hold? I have searched rigorously on the internet and found that many ABAP consultants are of the opinion that FS holds the address of the variable, i.e. 1000. But that is wrong: while debugging I found that FS holds only 5. So FS (in ABAP) is equivalent to *ptr (in C). Please correct me if my understanding is wrong. Now let's declare a data reference "dref" and another field symbol "fsym"; after creating the data reference we assign it to the field symbol, and then we can operate on the field symbol. So the difference between a data reference and a field symbol is: with a field symbol we first declare a variable and assign it to the field symbol, whereas with a data reference we first create the data reference and then assign that to a field symbol. Then what is the use of a data reference? The same functionality can be achieved through a field symbol alone.

    Read the article

  • How should I structure my database to gain maximum efficiency in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think of Google Analytics), my script harvests information like the visitor's IP address, referring link, current page link, user agent, etc. My clients can then view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets and pay invoices. Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, the traffic data of all clients would be stored in the same table, creating a mess. The same goes for the firewall rules and for the invoice and support systems. I'm looking for a way to structure my database in a more organized way so it can hold large amounts of data for multiple users. This is the first project I'm developing that deals with so much data, and I would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data: the main database would store user data (email, password, id, etc.) and admin/website settings, and each client would then have a unique database labeled prefix_userid, carrying tables that hold their traffic, invoice, and support ticket data. Would this be a good solution, and would spreading the data over multiple databases slow down or speed up overall performance? I have a solid VPS, but would like to be as safe and efficient as possible.

    Read the article

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services. As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is now driven by an expanding privacy and regulatory environment coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. This newly released Oracle whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c outlines a powerful story of a defense-in-depth, multi-layered security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fear of a massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle’s Database Security solutions. With our deep expertise in Oracle Database Security, Oracle Consulting can help clients protect data with the security solutions they need to succeed, covering architecture/planning, implementation, and expert services, which in turn provide faster adoption and return on investment with Oracle solutions. On June 10th at 10:00 AM PST, Larry Ellison will present an exclusive webcast entitled “The Future of Database Begins Soon”. In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to perform true real-time, ad-hoc, analytic queries on your organization’s business data as it exists at that moment and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c.

    Read the article

  • Displaying a Paged Grid of Data in ASP.NET MVC

    This article demonstrates how to display a paged grid of data in an ASP.NET MVC application and builds upon the work done in two earlier articles: Displaying a Grid of Data in ASP.NET MVC and Sorting a Grid of Data in ASP.NET MVC.
    Displaying a Grid of Data in ASP.NET MVC started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). Sorting a Grid of Data in ASP.NET MVC enhanced the application by adding a view-specific Model (ProductGridModel) that provided the View with the sorted collection of products to display, along with sort-related information such as the name of the database column the products were sorted by and whether the products were sorted in ascending or descending order. That article also walked through creating a partial view to render the grid's header row so that each column header was a link that, when clicked, sorted the grid by that column.
    In this article we enhance the view-specific Model (ProductGridModel) to include paging-related information: the current page being viewed, how many records to show per page, and how many total records are being paged through. Next, we create an action in the Controller that efficiently retrieves the appropriate subset of records to display, and then complete the exercise by building a View that displays that subset and includes a paging interface that allows the user to step to the next or previous page, or to jump to a particular page number. To display the numeric paging interface we create and use a partial view. Like its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more! Read More >
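    As a taste of the Controller piece described above, here is a minimal sketch of a paging action; it assumes the Linq-to-SQL NorthwindDataContext from the earlier articles, and the ProductGridModel property names shown here (CurrentPageIndex, PageSize, TotalRecordCount, Products) are illustrative guesses rather than the article's actual members:

        public ActionResult Paged(int page = 1, int pageSize = 10)
        {
            var db = new NorthwindDataContext();

            var model = new ProductGridModel
            {
                CurrentPageIndex = page,
                PageSize = pageSize,
                TotalRecordCount = db.Products.Count(),   // needed to render the page links
                Products = db.Products
                             .OrderBy(p => p.ProductName) // paging needs a stable sort
                             .Skip((page - 1) * pageSize) // jump to the requested page
                             .Take(pageSize)              // pull only one page of rows
                             .ToList()
            };

            return View(model);
        }

    The View and its numeric-pager partial view then only need TotalRecordCount and PageSize to work out how many page links to render.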

    Read the article

  • Item missing/damaged from Amazon order? Steps to quickly contact support to get replacements

    - by Gopinath
    I ordered a few items from Amazon.com last week, and when the package was delivered I found that an item was missing. I reached out to Amazon customer care and they resolved the issue by resending the missing item. Amazon’s customer care is awesome! They just asked me a few questions regarding the order and the person who accepted the delivery, and without any further delay they provided details on when I’ll get the replacement item. The best part is that Amazon customer care support is available through email, chat and telephone; I chose support through a chat session and resolved the issue. If you are also missing an item or received a damaged one, here are the steps to follow to get a replacement:
    Step 1 - Go to amazon.com/contact
    Step 2 - Click on the Contact Us button available in the General Support section; you may have to provide authentication details
    Step 3 - The Contact Us screen displays your recent order. If your query is related to the displayed order then proceed to the next step, otherwise select your order by clicking on the Choose Different Order button
    Step 4 - Select the missing/damaged items from the list of items displayed for your order
    Step 5 - Choose the issue with your order. In case an item is missing you may want to choose “Problem with an order”, “Missing item or parts” and “Entire item missing from shipment”, or whichever options are closest to the issue you are facing
    Step 6 - Choose how you want support: “E-Mail”, “Phone” or “Chat”. Based on the selected support mode, proceed and communicate the issue to them.
    Rest assured, they will resolve your issue quickly.

    Read the article

  • filtering dates in a data view webpart when using webservices datasource

    - by Patrick Olurotimi Ige
    I was working on a data view web part recently and I had to filter the data based on dates. Since the data source was web services I couldn't use the Offset trick which I blogged about earlier. When using web services to pull data in SharePoint Designer you have to use XPath. So, for example, this is the variable that populates the rows from the SOAP response:
        <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row"/>
    But you would need to add a predicate [] to filter the date nodes, so you can do something like this (note the predicate added at the end):
        <xsl:variable name="Rows" select="/soap:Envelope/soap:Body/ddw1:GetListItemsResponse/ddw1:GetListItemsResult/ddw1:listitems/rs:data/z:row[ddwrt:FormatDateTime(string(@ows_Created),1033,'yyyyMMdd') &gt;= ddwrt:FormatDateTime(string(substring-after($fd,'#')),1033,'yyyyMMdd')]"/>
    For the filtering to work you need to have the dates formatted as yyyyMMdd, as above. One more thing you must have noticed is the $fd variable: it comes from a calculated column I created in the list, something like [Created]-2. So basically all the XPath is doing is returning rows only when the Created date is greater than or equal to $fd, which is two days less than the created date. Also note that when using web services in SharePoint Designer, if you try to use the default filtering you won't see 'greater than' or 'less than' in the comparison option list. :( Hope this helps.

    Read the article

  • Methods for getting static data from obj-c to Parse (database)

    - by Phil
    I'm starting to think about how best to code my latest game on iOS, and I want to use parse.com to store pretty much everything, so I can easily change things. What I'm not sure about is how to get static data into Parse, or at least the best method for it. I've read about using NSMutableDictionary, plists, JSON, XML files, etc. Let's say, for example, that in AS3 I could create a simple data object like so:
        static var playerData:Object = {position:{startX:200, startY:200}};
    Stick it in a static class, and bingo, I've got my static data to use how I see fit. Basically I'm looking for the best method to do the same thing in Obj-C, except that the data is not to be stored directly in the iOS game but sent to parse.com and stored in their database. The game (at least the distribution version) will only load data from the Parse database, so however I get the static data into Parse, I want to be able to exclude it from the eventual iOS build. Any ideas on the best methods to sort that? If I had longer on this project, it might be nice to use storyboards and create a simple game editor to send data to Parse... actually maybe that's a good idea; it's just that I'm new to Obj-C and I'm looking for the most straightforward (read: quickest) way to achieve things. Thanks for any advice.

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >