Search Results

Search found 10747 results on 430 pages for 'password'.


  • External USB 3 drive not recognized

    - by ilan123
    Ubuntu 12.10 64 bit seems not to recognize my external hard disk. It is a Vantec NST-310S3 external disk enclosure with a WD 3TB drive. The disk has two NTFS partitions. My PC is a dual boot system. Under Windows 7 the hard disk works fine but I can't make it work with Ubuntu. When the drive is connected to the PC then the command sudo fdisk -l seems to hang forever. Below are the output of lsusb and cat /proc/partitions without the external drive and then with it connected. I added also the last lines of the dmesg command at the end. First without the drive: ilan@linux:~$ lsusb Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera ilan@linux:~$ cat /proc/partitions major minor #blocks name 8 0 1953514584 sda 8 1 102400 sda1 8 2 629043200 sda2 8 3 367001600 sda3 8 4 1 sda4 8 5 471859200 sda5 8 6 157286400 sda6 8 7 324115456 sda7 8 8 4101120 sda8 11 0 1048575 sr0 Second with the USB 3 drive: ilan@linux:~$ lsusb Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 13ba:0017 Unknown PS/2 Keyboard+Mouse Adapter Bus 001 Device 004: ID 046d:c50e Logitech, Inc. Cordless Mouse Receiver Bus 001 Device 005: ID 0ac8:3420 Z-Star Microelectronics Corp. Venus USB2.0 Camera ilan@linux:~$ cat /proc/partitions major minor #blocks name 8 0 1953514584 sda 8 1 102400 sda1 8 2 629043200 sda2 8 3 367001600 sda3 8 4 1 sda4 8 5 471859200 sda5 8 6 157286400 sda6 8 7 324115456 sda7 8 8 4101120 sda8 11 0 1048575 sr0 8 16 2930266584 sdb ilan@linux:~$ lsusb -v -s 004:002 Bus 004 Device 002: ID 174c:55aa ASMedia Technology Inc. Couldn't open device, some information will be missing Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 9 idVendor 0x174c ASMedia Technology Inc. 
idProduct 0x55aa bcdDevice 1.00 iManufacturer 2 iProduct 3 iSerial 1 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 44 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk-Only iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0400 1x 1024 bytes bInterval 0 bMaxBurst 15 ilan@linux:~$ sudo fdisk -l [sudo] password for ilan: Disk /dev/sda: 2000.4 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xf1b4f1ee Device Boot Start End Blocks Id System /dev/sda1 * 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 206848 1258293247 629043200 7 HPFS/NTFS/exFAT /dev/sda3 1258293248 1992296447 367001600 7 HPFS/NTFS/exFAT /dev/sda4 1992298494 3907028991 957365249 f W95 Ext'd (LBA) /dev/sda5 1992298496 2936016895 471859200 7 HPFS/NTFS/exFAT /dev/sda6 2936018944 3250591743 157286400 7 HPFS/NTFS/exFAT /dev/sda7 3250593792 3898824703 324115456 83 Linux /dev/sda8 3898826752 3907028991 4101120 82 Linux swap / Solaris dmesg output after connecting the external drive: [ 23.740567] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx [ 23.740786] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 49.144673] usb 4-1: >new SuperSpeed USB device number 2 using xhci_hcd [ 49.163039] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 49.166789] usb 4-1: >New USB device found, idVendor=174c, idProduct=55aa [ 49.166793] usb 4-1: >New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 49.166796] usb 4-1: >Product: AS2105 [ 49.166799] usb 4-1: >Manufacturer: ASMedia [ 49.166801] usb 4-1: >SerialNumber: 0123456789ABCDEF [ 49.206372] usbcore: registered new interface driver uas [ 49.228891] Initializing USB Mass Storage driver... [ 49.229042] scsi6 : usb-storage 4-1:1.0 [ 49.229115] usbcore: registered new interface driver usb-storage [ 49.229116] USB Mass Storage support registered. [ 64.045528] scsi 6:0:0:0: >Direct-Access WDC WD30 EZRX-00MMMB0 80.0 PQ: 0 ANSI: 0 [ 64.046224] sd 6:0:0:0: >Attached scsi generic sg2 type 0 [ 64.046881] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16). [ 64.047610] sd 6:0:0:0: >[sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB) [ 64.048368] sd 6:0:0:0: >[sdb] Write Protect is off [ 64.048373] sd 6:0:0:0: >[sdb] Mode Sense: 23 00 00 00 [ 64.048984] sd 6:0:0:0: >[sdb] No Caching mode page present [ 64.048987] sd 6:0:0:0: >[sdb] Assuming drive cache: write through [ 64.049297] sd 6:0:0:0: >[sdb] Very big device. Trying to use READ CAPACITY(16). 
[ 64.050942] sd 6:0:0:0: >[sdb] No Caching mode page present [ 64.050944] sd 6:0:0:0: >[sdb] Assuming drive cache: write through [ 94.245006] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd [ 94.262553] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 94.263805] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00 [ 94.263808] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40 [ 125.262722] usb 4-1: >reset SuperSpeed USB device number 2 using xhci_hcd [ 125.280304] usb 4-1: >Parent hub missing LPM exit latency info. Power management will be impacted. [ 125.281511] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c00 [ 125.281516] xhci_hcd 0000:03:00.0: >xHCI xhci_drop_endpoint called with disabled ep ffff8800d37d1c40


  • WebLogic JDBC Use of Oracle Wallet for SSL

    - by Steve Felts
    Introduction Secure Sockets Layer (SSL) can be used to secure the connection between the middle tier “client”, WebLogic Server (WLS) in this case, and the Oracle database server.  Data between WLS and database can be encrypted.  The server can be authenticated so you have proof that the database can be trusted by validating a certificate from the server.  The client can be authenticated so that the database only accepts connections from clients that it trusts. Similar to the discussion in an earlier article about using the Oracle wallet for database credentials, the Oracle wallet can also be used with SSL to store the keys and certificates.  By using it correctly, clear text passwords can be eliminated from the JDBC configuration and client/server configuration can be simplified by sharing the wallet across multiple datasources. There is a very good Oracle Technical White Paper on using SSL with the Oracle thin driver at http://www.oracle.com/technetwork/database/enterprise-edition/wp-oracle-jdbc-thin-ssl-130128.pdf [LINK1].  The link http://www.oracle.com/technetwork/middleware/weblogic/index-087556.html [LINK2] describes how to use WebLogic Server with Oracle JDBC Driver SSL. The information in this article is a guide on what steps need to be taken in the variety of available options; use the links above for details. SSL from the driver to the database server is basically turned on by specifying a protocol of “tcps” in the URL.  However, there is a fair amount of setup needed.  Also remember that there is an overhead in performance. Creating the wallets The common use cases are 1. “data encryption and server-only authentication”, requiring just a trust store, or 2. “data encryption and authentication of both tiers” (client and server), requiring a trust store and a key store. It is recommended to use the auto-login wallet type so that clear text passwords are not needed in the datasource configuration to open the wallet.  The store type for an auto-login wallet is “SSO” (Single Sign On), not “JKS” or “PKCS12” as in [LINK2].  The file name is “cwallet.sso”. Wallets are created using the orapki tool.  They need to be created based on the usage (encryption and/or authentication).  This is discussed in detail in [LINK1] in Appendix B or in the Advanced Security Administrator’s Guide of the Database documentation. Database Server Configuration It is necessary to update the sqlnet.ora and listener.ora files with the directory location of the wallet using WALLET_LOCATION.  These files also indicate whether or not SSL_CLIENT_AUTHENTICATION is being used (true or false). The Oracle Listener must also be configured to use the TCPS protocol.  The recommended port is 2484. LISTENER = (ADDRESS_LIST= (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484))) WebLogic Server Classpath The WebLogic Server CLASSPATH must have three additional security files. The files that need to be added to the WLS CLASSPATH are $MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar $MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar $MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar One way to do this is to add them to PRE_CLASSPATH environment variable for use with the standard WebLogic scripts. Setting the Oracle Security Provider It’s necessary to enable the Oracle PKI provider on the client side.  
This can either be done statically by updating the java.security file under the JRE or dynamically by setting it in a WLS startup class using java.security.Security.insertProviderAt(new oracle.security.pki.OraclePKIProvider (), 3); See the full example of the startup class in [LINK2]. Datasource Configuration When creating a WLS datasource, set the PROTOCOL in the URL to tcps as in the following. jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=myservice))) For encryption and server authentication, use the datasource connection properties: - javax.net.ssl.trustStore=location of wallet file on the client - javax.net.ssl.trustStoreType=”SSO” For client authentication, use the datasource connection properties: - javax.net.ssl.keyStore=location of wallet file on the client - javax.net.ssl.keyStoreType=”SSO” Note that the driver connection properties for the wallet require a file name, not a directory name. Active GridLink ONS over SSL For completeness, there is another SSL usage for WLS datasources.  The communication with the Oracle Notification Service (ONS) for load balancing information and node up/down events can use SSL also. Create an auto-login wallet and use the wallet on the client and server.  The following is a sample sequence to create a test wallet for use with ONS. orapki wallet create -wallet ons -auto_login -pwd ONS_Wallet orapki wallet add -wallet ons -dn "CN=ons_test,C=US" -keysize 1024 -self_signed -validity 9999 -pwd ONS_Wallet orapki wallet export -wallet ons -dn "CN=ons_test,C=US" -cert ons/cert.txt -pwd ONS_Wallet On the database server side, it’s necessary to define the walletfile directory in the file $CRS_HOME/opmn/conf/ons.config and run onsctl stop/start. When configuring an Active GridLink datasource, the connection to the ONS must be defined.  In addition to the host and port, the wallet file directory must be specified.  By not giving a password, a SSO wallet is assumed. Summary To use SSL with the Oracle thin driver without any clear text passwords, use an SSO Oracle Wallet.  SSL support in the Oracle thin driver is available starting in 10g Release 2.
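    As a rough end-to-end sketch of the client-side steps above (registering the Oracle PKI provider dynamically and pointing the thin driver at an auto-login wallet over tcps), the standalone class below is illustrative only: the class name, host, port, service name and wallet path are placeholders, and in WebLogic Server itself the javax.net.ssl.* settings would normally be entered as datasource connection properties rather than written in code.

        import java.security.Security;
        import java.sql.Connection;
        import java.util.Properties;

        import oracle.jdbc.pool.OracleDataSource;
        import oracle.security.pki.OraclePKIProvider;

        public class WalletSslConnectTest {
            public static void main(String[] args) throws Exception {
                // Dynamic equivalent of editing java.security; position 3 matches the call quoted above.
                Security.insertProviderAt(new OraclePKIProvider(), 3);

                OracleDataSource ods = new OracleDataSource();
                // PROTOCOL=tcps turns on SSL; 2484 is the recommended TCPS listener port.
                ods.setURL("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)"
                        + "(HOST=dbhost)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=myservice)))");

                Properties props = new Properties();
                // Data encryption and server-only authentication: trust store pointing at the SSO wallet file.
                props.setProperty("javax.net.ssl.trustStore", "/path/to/wallet/cwallet.sso");
                props.setProperty("javax.net.ssl.trustStoreType", "SSO");
                // For client authentication as well (two-way SSL), also supply the key store:
                // props.setProperty("javax.net.ssl.keyStore", "/path/to/wallet/cwallet.sso");
                // props.setProperty("javax.net.ssl.keyStoreType", "SSO");
                ods.setConnectionProperties(props);

                Connection conn = ods.getConnection();
                System.out.println("Connected over SSL, closed=" + conn.isClosed());
                conn.close();
            }
        }

    The same insertProviderAt call can live in a WebLogic startup class, which is the dynamic option described under "Setting the Oracle Security Provider" above.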


  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (current status: API v1.1). Naturally, LINQ to Twitter needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know.

    Overall Impact
    Generally speaking, Twitter API v1.1 is semantically very much the same as its predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn't change much at all. The following sections describe what did change.

    Authentication
    In Twitter API v1.0, authentication was not required for some resources, such as user timelines and search. However, that's all changed because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter. See the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support.

    The New Search
    One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results. Additionally, any metadata associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn't change.

    Different Rate Limits
    The issue of rate limits itself is contentious, but this discussion is focused on the coding experience and I'll leave the politics to those who prefer to engage in that activity. What's important here is that both headers and resources have changed. You should review Twitter's Rate Limit documentation to understand what the changes mean. A quick explanation is that rate limits are applied individually to each resource in 15-minute intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for. The results materialize in the RateLimits dictionary, keyed on category. The Help entity also has a RateLimitsAuthorizationContext, holding the access token for the user performing queries – and to whom the rate limits apply. In addition to the new RateLimits query, there are new rate limit headers that appear in the query response, whose HTTP header names are of the form X-Rate-Limit…, which differs from the previous header names. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. If you retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names. I haven't done anything with Feature rate limit properties yet, but they appear to no longer be available – this will require more follow-up.

    Error Handling
    Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There's also a new ErrorCode property that's populated with the message error code.

    Parameters
    Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets that provide start/end positions in the tweet and other information for mentions, URLs, hashtags, and media. Entities used to be excluded unless you specified that you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status. If you were always setting IncludeEntities to true, then you won't see a change. However, be aware that you'll now be receiving additional data in your response from Twitter, which would explain a sudden increase in bandwidth utilization. This might or might not matter to you depending on the requirements of your application, but you should be aware of it.

    Everything Else
    There might be small changes here and there that I haven't mentioned, but these were the ones you should be most aware of. Streams didn't change, but Twitter will be deprecating username/password authentication on public streams in favor of OAuth, so you'll be seeing me make that change some time in the future. Also, Twitter will continue to evolve the API, and you can expect that LINQ to Twitter will change accordingly.

    Summary
    The big changes to the Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You'll need to change your code to read Search results differently, but the query is much the same as you use now. There's a new RateLimits API, one of the Help queries. Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect most others to be small or to affect a smaller percentage of developers. You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com. @JoeMayo
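    To make the New Search and Error Handling changes concrete, here is a rough C# sketch. It assumes a TwitterContext built with one of the OAuth authorizers (required for all v1.1 calls); the query shape follows the LINQ to Twitter documentation of this era, so double-check member names against the release you are using.

        using System;
        using System.Linq;
        using LinqToTwitter;

        class SearchV11Example
        {
            static void ShowSearchResults(TwitterContext twitterCtx)
            {
                try
                {
                    var searchResponse =
                        (from search in twitterCtx.Search
                         where search.Type == SearchType.Search &&
                               search.Query == "LINQ to Twitter"
                         select search)
                        .SingleOrDefault();

                    // In v1.1 the tweets hang off Search.Statuses rather than the old results collection.
                    if (searchResponse != null && searchResponse.Statuses != null)
                    {
                        foreach (var status in searchResponse.Statuses)
                        {
                            Console.WriteLine(status.Text);
                        }
                    }
                }
                catch (TwitterQueryException ex)
                {
                    // v1.1 error payloads surface through Message and ErrorCode.
                    Console.WriteLine("Twitter error {0}: {1}", ex.ErrorCode, ex.Message);
                }
            }
        }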


  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle.  He shares with us a sample session using ROracle with the TimesTen In-Memory database.  Beginning in version 1.1-4, ROracle includes support for the Oracle Times Ten In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast and high throughput through its memory-centric architecture.  TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.The latest ROracle enhancements include: Support for Oracle TimesTen In-Memory Database Support for Date-Time using R's POSIXct/POSIXlt data types RAW, BLOB and BFILE data type support Option to specify number of rows per fetch operation Option to prefetch LOB data Break support using Ctrl-C Statement caching support Times Ten 11.2.2 contains enhanced support for analytics workloads and complex queries: Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE Analytic clauses: OVER PARTITION BY and OVER ORDER BY Multidimensional grouping operators: Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS Grouping functions: GROUP, GROUPING_ID, GROUP_ID WITH clause, which allows repeated references to a named subquery block Aggregate expressions over DISTINCT expressions General expressions that return a character string in the source or a pattern within the LIKE predicate Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause) Note: Some functionality is only available with Oracle Exalytics, refer to the TimesTen product licensing document for details. Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver. > install.packages("ROracle") > library(ROracle) Loading required package: DBI > drv <- dbDriver("Oracle") Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user. > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct") You have the option to report the server type - Oracle or TimesTen? > print (paste ("Server type =", dbGetInfo (conn)$serverType)) [1] "Server type = TimesTen IMDB" To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session. > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE) [1] TRUE Verify that the newly created IRIS table is available in the database. 
To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively. > dbListTables (conn) [1] "IRIS" > dbListFields (conn, "IRIS") [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES" To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time. > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)') > iris.summary SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH 1 setosa 5.006000 3.428000 1.462 0.246000 2 versicolor 5.936000 2.770000 4.260 1.326000 3 virginica 6.588000 2.974000 5.552 2.026000 4 <NA> 5.843333 3.057333 3.758 1.199333 Finally, disconnect from the TimesTen Database. > dbCommit (conn) [1] TRUE > dbDisconnect (conn) [1] TRUE We encourage you download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: Times Ten In-Memory Database,  ROracle.  As always, we welcome comments and questions on the TimesTen and  Oracle R technical forums.


  • Can't see my partitions after grub recovery

    - by dimbo
    I stupidly inserted the windows CD into my dual boot Ubuntu 11.10 / windows xp system. I just wanted to see if I could install windows on my external usb HD, but didn't actually go ahead with the install. It seems like the windows CD messed up my MBR and I had to use boot-repair and the ubuntu 11.10 live CD to gain access to ubuntu again. It seems to boot up a little differently (slower) but works. However, I now cant see any of my partitions in nautilus (there are 3). When I open gparted, it just shows my whole hard drive as unallocated (I know it has a windows partition that works and my ubuntu partition that I am using now). If I insert a usb pen, it is also not visible in nautilus but in gparted shows up as a FAT32 partition (which is correct although I cannot access it). sudo fdisk -l gives the following : demian@dimbo-TP:~$ sudo fdisk -l [sudo] password for demian: omitting empty partition (5) Disk /dev/sda: 320.1 GB, 320072933376 bytes 240 heads, 63 sectors/track, 41345 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x877b877b Device Boot Start End Blocks Id System /dev/sda1 * 63 63842309 31921123+ 7 HPFS/NTFS/exFAT /dev/sda2 63844350 133484084 34819867+ 5 Extended /dev/sda3 127636488 133484084 2923798+ 82 Linux swap / Solaris /dev/sda4 133484085 625137344 245826630 83 Linux /dev/sda5 63844352 123445247 29800448 83 Linux /dev/sda6 123447296 127635455 2094080 82 Linux swap / Solaris Disk /dev/sdb: 8100 MB, 8100249600 bytes 12 heads, 40 sectors/track, 32960 cylinders, total 15820800 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc3072e18 Device Boot Start End Blocks Id System /dev/sdb1 * 5992 15820799 7907404 c W95 FAT32 (LBA) Here is my grub.conf file. Like I said before, I had to use the 'boot-repair' utility with the live cd to get grub working again. I think that this utility maybe created a new grub for me because the startup is definitely not the same. The screen goes blank for a while, and then the ubuntu loading dots come up for a brief moment (instead of during the whole startup process) before the dektop is displayed. 
Anyway : # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=10 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.0.0-12-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a linux /boot/vmlinuz-3.0.0-12-generic root=UUID=5349ff67-b7b7-489f-a881-ae49c1dcd84a ro quiet splash vt.handoff=7 initrd /boot/initrd.img-3.0.0-12-generic } menuentry 'Ubuntu, with Linux 3.0.0-12-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a echo 'Loading Linux 3.0.0-12-generic ...' linux /boot/vmlinuz-3.0.0-12-generic root=UUID=5349ff67-b7b7-489f-a881-ae49c1dcd84a ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.0.0-12-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 5349ff67-b7b7-489f-a881-ae49c1dcd84a linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Microsoft Windows XP Professional (on /dev/sda1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 72A89361A89322A1 drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### How can I get things back to normal. Thanks, Demian.


  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components: the source task and the destination task are the easiest way to transfer data in SSIS. Some data transactions do not fit this model, they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. In this article we will use the CreateJob and the CreateBatch stored procedures available in RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers. Step 1: Open Visual Studio and create a new Integration Services Project. Step 2: Add a new Data Flow Task to the Control Flow window. Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop-up allowing you to select the Script Component Type: pick the source type as we will be outputting columns from our stored procedure. Step 4: Double click the Script Component to open the editor. Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property from the property editor. In our example, we'll add the column JobID of type String. Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boiler plate code in it. Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce Component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures along with their inputs and outputs in the product help. //Configure the connection string to your credentials String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;"; using (SalesforceConnection conn = new SalesforceConnection(connectionString)) { //Create the command to call the stored procedure CreateJob SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact")); cmd.Parameters.Add(new SalesforceParameter("Action", "insert")); //Execute CreateJob //CreateBatch requires JobID as input so we store this value for later SalesforceDataReader rdr = cmd.ExecuteReader(); String JobID = ""; while (rdr.Read()) { JobID = (String)rdr["JobID"]; } //Create the command for CreateBatch, for this example we are adding two new rows SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn); batCmd.CommandType = CommandType.StoredProcedure; batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID)); batCmd.Parameters.Add(new SalesforceParameter("Aggregate", "<Contact><Row><FirstName>Bill</FirstName>" + "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>")); //Execute CreateBatch SalesforceDataReader batRdr = batCmd.ExecuteReader(); } Step 7b: If you had specified output columns earlier, you can now add data into them using the UserComponent Output0Buffer. For example, we had set an output column called JobID of type String so now we can set a value for it. We will modify the DataReader that contains the output of CreateJob like so:. 
while (rdr.Read()) { Output0Buffer.AddRow(); JobID = (String)rdr["JobID"]; Output0Buffer.JobID = JobID; } Step 8: Note: You will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced and include the following using statements to the top of the class: using System.Data; using System.Data.RSSBus.Salesforce; Step 9: Once you are done editing your script, save it, and close the window. Click OK in the Script Transformation window to go back to the main pane. Step 10: If had any outputs from the Script Component you can use them in your data flow. For example we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobId in the file. Step 11: Your project should be ready to run.


  • ROracle support for TimesTen In-Memory Database

    - by Sherry LaMonica
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle.  He shares with us a sample session using ROracle with the TimesTen In-Memory database.  Beginning in version 1.1-4, ROracle includes support for the Oracle Times Ten In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast and high throughput through its memory-centric architecture.  TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.The latest ROracle enhancements include: Support for Oracle TimesTen In-Memory Database Support for Date-Time using R's POSIXct/POSIXlt data types RAW, BLOB and BFILE data type support Option to specify number of rows per fetch operation Option to prefetch LOB data Break support using Ctrl-C Statement caching support Times Ten 11.2.2 contains enhanced support for analytics workloads and complex queries: Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE Analytic clauses: OVER PARTITION BY and OVER ORDER BY Multidimensional grouping operators: Grouping clauses: GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS Grouping functions: GROUP, GROUPING_ID, GROUP_ID WITH clause, which allows repeated references to a named subquery block Aggregate expressions over DISTINCT expressions General expressions that return a character string in the source or a pattern within the LIKE predicate Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause) Note: Some functionality is only available with Oracle Exalytics, refer to the TimesTen product licensing document for details. Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver. > install.packages("ROracle") > library(ROracle) Loading required package: DBI > drv <- dbDriver("Oracle") Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user. > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct") You have the option to report the server type - Oracle or TimesTen? > print (paste ("Server type =", dbGetInfo (conn)$serverType)) [1] "Server type = TimesTen IMDB" To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session. > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE) [1] TRUE Verify that the newly created IRIS table is available in the database. 
To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively. > dbListTables (conn) [1] "IRIS" > dbListFields (conn, "IRIS") [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES" To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time. > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)') > iris.summary SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH 1 setosa 5.006000 3.428000 1.462 0.246000 2 versicolor 5.936000 2.770000 4.260 1.326000 3 virginica 6.588000 2.974000 5.552 2.026000 4 <NA> 5.843333 3.057333 3.758 1.199333 Finally, disconnect from the TimesTen Database. > dbCommit (conn) [1] TRUE > dbDisconnect (conn) [1] TRUE We encourage you download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: Times Ten In-Memory Database,  ROracle.  As always, we welcome comments and questions on the TimesTen and  Oracle R technical forums.


  • Where would a spam bot be located?

    - by Tim
    I have a hosted website using a free hosting service, I received an email this afternoon saying that I have been suspended because my account has been compromised. Basically, someone is using my email account to mass send spam. I've changed all the passwords and everything but when my Gmail pulls the emails from the host it's still downloading loads of spam messages that show like this: This message was created automatically by mail delivery software. A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed: [email protected] SMTP error from remote mail server after end of data: host 198.91.80.251 [198.91.80.251]: 554 5.6.0 id=23634-03 - Rejected by MTA on relaying, from MTA([127.0.0.1]:10030): 554 Error: This email address has lost rights to send email from the system ------ This is a copy of the message, including all the headers. ------ Return-path: <[email protected]> Received: from keenesystems.com ([66.135.33.211]:2370 helo=server211) by absolut.x10hosting.com with esmtpsa (TLSv1:RC4-MD5:128) (Exim 4.77) (envelope-from <[email protected]>) id 1TGwSW-002hHe-Lc for [email protected]; Wed, 26 Sep 2012 13:35:44 -0500 MIME-Version: 1.0 Date: Wed, 26 Sep 2012 13:35:43 -0500 X-Priority: 3 (Normal) X-Mailer: Ximian Evolution 3.9.9 (8.5.3-6) Subject: New staff members wanted at Auction It Online From: [email protected] Reply-To: [email protected] To: "Nadia Monti" <[email protected]> Content-Type: text/plain Content-Transfer-Encoding: quoted-printable Message-ID: <OUTLOOK-IDM-9aed7054-6a3e-e1a4-1d5c-3e73377652a6@server211> Date : 26 September 2012=0ATime : 13:35=0ASender : Dennise Halcomb Head = Office Manager of RJ Auction Drop-Off Int.=0A=0ANice to meet you Nadia M= onti=0A=0ARJ ADO Ltd., a USA based company, offers a significant amount = of goods worldwide for our customers on eBay and other auction venues. = Our company's main target is to provide a suitable and cost-effective se= rvice for any person, company or fundraising company. The main purpose o= f the administrative assistant / sales support representative is to cont= ribute to the sales force and add convenience to our cost-effective serv= ice dedicated to individuals, businesses, and organizations worldwide. O= ur HR department obtained your resume from one of the various job-orient= ed websites just to offer you this post.=0A=0AWorking Schedule: This is = a part time and home-based offer. You won't need to spend more than 3 ho= urs each day. Your =0Aschedule will be flexible.=0A=0ASalary: At the end= of the trial period (it lasts for 1 month) you will be paid 1,800 EUR. = With the average volume of clients your overall income will raise up to = 3,000 EUR per month. After the trial period is over your base salary wil= l grow up to 2,500 EUR per month, so you will earn 5% commission from th= e transactions completed.=0A=0AWhere?: Italy Wide. As it is a stay at ho= me position all the communication will be carried out via email and via = phone.=0A=0ARequirements: Access to the internet during the workday and = basic microsoft office skills are needed. Basic knowledge of English is = required (most of the contacts will be in English).=0A=0ACosts and Fees:= There are NO costs at any time for our employees. All fees related to t= his position are covered by the RJ ADO Co. 
Ltd..=0A=0AFurther Hiring Pro= cess: If you are interested in position we offer, please reply to this e= mail and send us the copy of your resume for verification.=0A=0AAfter re= viewing all of the received applications we will reply to successful app= licants only. Then we'll offer to these successful applicants a position= within our firm on a trial period basis for one month beginning from th= e date you sign a trial agreement. During this trial period you will rec= eive full guidance and support. Employees on a one monthly trial period = are evaluated at least one week prior to the end of their trial. During = the trial, your supervisor can recommend termination. At the end of the = trial period, the supervisor can offer continued employment, extension o= f trial period, or termination. After the trial period you may ask for m= ore hours or continue full-time.=0A=0AIf you are interested in this posi= tion, just reply to this email and send any questions you have and the c= opy of your resume for verification.=0A=0AThank You,=0AHR-Manager of RJ = ADO Co. Ltd.=0A=0APermission Settings=0AYou have been referred to RJ Auc= tion Drop-Off If you feel you received this email in error or do not wis= h to receive future messages, please reply to this message with "remove"= in the subject field. We will immediately update our database according= ly. =0AWe apologize for any inconvenience caused.=0A=0ARJ Auction Drop-O= ff Co. Ltd. I'm not aware of how this has happened. I'm not sure how anyone could have got hold of my password. It's a simple wordpress install, at some point recently my host went down and there was a fresh install of wordpress with default admin accounts, I have a feeling it could be something to do with this. My question is, even though I've changed all my passwords it's all still happening, is there annywhere in paticular this script would be stored on my host. I really can't deal with having my hosting account suspended and my email account sending all this spam.


  • Taking the Plunge - or Dipping Your Toe - into the Fluffy IAM Cloud by Paul Dhanjal (Simeio Solutions)

    - by Greg Jensen
    In our last three posts, we’ve examined the revolution that’s occurring today in identity and access management (IAM). We looked at the business drivers behind the growth of cloud-based IAM, the shortcomings of the old, last-century IAM models, and the new opportunities that federation, identity hubs and other new cloud capabilities can provide by changing the way you interact with everyone who does business with you. In this, our final post in the series, we’ll cover the key things you, the enterprise architect, should keep in mind when considering moving IAM to the cloud. Invariably, what starts the consideration process is a burning business need: a compliance requirement, security vulnerability or belt-tightening edict. Many on the business side view IAM as the “silver bullet” – and for good reason. You can almost always devise a solution using some aspect of IAM. The most critical question to ask first when using IAM to address the business need is, simply: is my solution complete? Typically, “business” is not focused on the big picture. Understandably, they’re focused instead on the need at hand: Can we be HIPAA compliant in 6 months? Can we tighten our new hire, employee transfer and termination processes? What can we do to prevent another password breach? Can we reduce our service center costs by the end of next quarter? The business may not be focused on the complete set of services offered by IAM but rather a single aspect or two. But it is the job – indeed the duty – of the enterprise architect to ensure that all aspects are being met. It’s like remodeling a house but failing to consider the impact on the foundation, the furnace or the zoning or setback requirements. While the homeowners may not be thinking of such things, the architect, of course, must. At Simeio Solutions, the way we ensure that all aspects are being taken into account – to expose any gaps or weaknesses – is to assess our client’s IAM capabilities against a five-step maturity model ranging from “ad hoc” to “optimized.” The model we use is similar to Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. It’s based upon some simple criteria, which can provide a visual representation of how well our clients fair when evaluated against four core categories: ·         Program Governance ·         Access Management (e.g., Single Sign-On) ·         Identity and Access Governance (e.g., Identity Intelligence) ·         Enterprise Security (e.g., DLP and SIEM) Often our clients believe they have a solution with all the bases covered, but the model exposes the gaps or weaknesses. The gaps are ideal opportunities for the cloud to enter into the conversation. The complete process is straightforward: 1.    Look at the big picture, not just the immediate need – what is our roadmap and how does this solution fit? 2.    Determine where you stand with respect to the four core areas – what are the gaps? 3.    Decide how to cover the gaps – what role can the cloud play? Returning to our home remodeling analogy, at some point, if gaps or weaknesses are discovered when evaluating the complete impact of the proposed remodel – if the existing foundation wouldn’t support the new addition, for example – the owners need to decide if it’s time to move to a new house instead of trying to remodel the old one. However, with IAM it’s not an either-or proposition – i.e., either move to the cloud or fix the existing infrastructure. 
It’s possible to use new cloud technologies just to cover the gaps. Many of our clients start their migration to the cloud this way, dipping in their toe instead of taking the plunge all at once. Because our cloud services offering is based on the Oracle Identity and Access Management Suite, we can offer a tremendous amount of flexibility in this regard. The Oracle platform is not a collection of point solutions, but rather a complete, integrated, best-of-breed suite. Yet it’s not an all-or-nothing proposition. You can choose just the features and capabilities you need using a pay-as-you-go model, incrementally turning on and off services as needed. Better still, all the other capabilities are there, at the ready, whenever you need them. Spooling up these cloud-only services takes just a fraction of the time it would take a typical organization to deploy internally. SLAs in the cloud may be higher than on premise, too. And by using a suite of software that’s complete and integrated, you can dramatically lower cost and complexity. If your in-house solution cannot be migrated to the cloud, you might consider using hardware appliances such as Simeio’s Cloud Interceptor to extend your enterprise out into the network. You might also consider using Expert Managed Services. Cost is usually the key factor – not just development costs but also operational sustainment costs. Talent or resourcing issues often come into play when thinking about sustaining a program. Expert Managed Services such as those we offer at Simeio can address those concerns head on. In a cloud offering, identity and access services lend to the new paradigms described in my previous posts. Most importantly, it allows us all to focus on what we're meant to do – provide value, lower costs and increase security to our respective organizations. It’s that magic “silver bullet” that business knew you had all along. If you’d like to talk more, you can find us at simeiosolutions.com.


  • Simple BizTalk Orchestration & Port Tutorial

    - by bosuch
    (This is a reference for a lunch & learn I'm giving at my company) This demo will create a BizTalk process that monitors a directory for an XML file, loads it into an orchestration, and drops it into a different directory. There’s no real processing going on (other than moving the file from one location to another), but this will introduce you to Messages, Orchestrations and Ports. To begin, create a new BizTalk Project names OrchestrationPortDemo: When the solution has been created, right-click the OrchestrationPortDemo solution name and select Add -> New Item. Add a BizTalk Orchestration named DemoOrchestration: Click Add and the orchestration will be created and displayed in the BizTalk Orchestration Designer. The designer allows you to visually create your business processes: Next, you will add a message (the basic unit of communication) to the orchestration. In the Orchestration View, right-click Messages and select New Message. In the message properties window, enter DemoMessage as the Identifier (the name), and select .NET Classes -> System.Xml.XmlDocument for Message Type. This indicates that we’ll be passing a standard Xml document in and out of the orchestration. Next, you will add Send and Receive shapes to the orchestration. From the toolbox, drag a Receive shape onto the orchestration (where it says “Drop a shape from the toolbox here”). Next, drag a Send shape directly below the Receive shape. For the properties of both shapes, select DemoMessage for Message – this indicates we’ll be passing around the message we created earlier. The Operation box will have a red exclamation mark next to it because no port has been specified. We will do this in a minute. On the Receive shape properties, you must be sure to select True for Activate. This indicates that the orchestration will be started upon receipt of a message, rather than being called by another orchestration. If you leave it set to false, when you try to build the application you’ll receive the error “You must specify at least one already-initialized correlation set for a non-activation receive that is on a non self-correlating port.” Now you’ll add ports to the orchestration. Ports specify how your orchestration will send and receive messages. Drag a port from the toolbox to the left-hand Port Surface, and the Port Configuration Wizard launches. For the first port (the receive port), enter the following information: Name: ReceivePort Select the port type to be used for this port: Create a new Port Type Port Type Name: ReceivePortType Port direction of communication: I’ll always be receiving <…> Port binding: Specify later By choosing “Specify later” you are choosing to bind the port (choose where and how it will send or receive its messages) at deployment time via the BizTalk Server Administration console. This allows you to change locations later without building and re-deploying the application. Next, drag a port to the right-hand Port Surface; this will be your send port. Configure it as follows: Name: SendPort Select the port type to be used for this port: Create a new Port Type Port Type Name: SendPortType Port direction of communication: I’ll always be sending <…> Port binding: Specify later Finally, drag the green arrow on the ReceivePort to the Receive_1 shape, and the green arrow on the SendPort to the Send_1 shape. Your orchestration should look like this: Now you have a couple final steps before building and deploying the application. 
In the Solution Explorer, right-click on OrchestrationPortDemo and select Properties. On the Signing tab, click “Sign the assembly”, and choose <New…> from the drop-down. Enter DemoKey as the Key file name, and deselect “Protect my key file with a password”. This will create the file DemoKey.snk in your solution. Signing the assembly gives it a strong name so that it can be deployed into the global assembly cache (GAC). Next, click the Deployment tab, and enter OrchestrationPortDemo as the Application Name. Save your solution. Click “Build OrchestrationPortDemo”. Your solution should (hopefully!) build with no errors. Click “Deploy OrchestrationPortDemo”. (Note – If you’re running Server 2008, Vista or Win7, you may get an error message. If so, close Visual Studio and run it as an administrator) That’s it! Your application is ready to be configured and fired up in the BizTalk Server Administration console, so stay tuned!


  • spring security : Failed to load ApplicationContext with pre-post-annotations="enabled"

    - by thogau
    I am using spring 3.0.1 + spring-security 3.0.2 and I am trying to use features like @PreAuthorize and @PostFilter annotations. When running in units tests using @RunWith(SpringJUnit4ClassRunner.class) or in a main(String[] args) method my application context fails to start if enable pre-post-annotations and use org.springframework.security.acls.AclPermissionEvaluator : <!-- Enable method level security--> <security:global-method-security pre-post-annotations="enabled"> <security:expression-handler ref="expressionHandler"/> </security:global-method-security> <bean id="expressionHandler" class="org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler"> <property name="permissionEvaluator" ref="aclPermissionEvaluator"/> </bean> <bean id="aclPermissionEvaluator" class="org.springframework.security.acls.AclPermissionEvaluator"> <constructor-arg ref="aclService"/> </bean> <!-- Enable stereotype support --> <context:annotation-config /> <context:component-scan base-package="com.rreps.core" /> <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <list> <value>classpath:applicationContext.properties</value> </list> </property> </bean> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"> <property name="driverClass" value="${jdbc.driver}" /> <property name="jdbcUrl" value="${jdbc.url}" /> <property name="user" value="${jdbc.username}" /> <property name="password" value="${jdbc.password}" /> <property name="initialPoolSize" value="10" /> <property name="minPoolSize" value="5" /> <property name="maxPoolSize" value="25" /> <property name="acquireRetryAttempts" value="10" /> <property name="acquireIncrement" value="5" /> <property name="idleConnectionTestPeriod" value="3600" /> <property name="maxIdleTime" value="10800" /> <property name="maxConnectionAge" value="14400" /> <property name="preferredTestQuery" value="SELECT 1;" /> <property name="testConnectionOnCheckin" value="false" /> </bean> <bean id="auditedSessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation" value="classpath:hibernate.cfg.xml" /> <property name="hibernateProperties"> <value> hibernate.dialect=${hibernate.dialect} hibernate.query.substitutions=true 'Y', false 'N' hibernate.cache.use_second_level_cache=true hibernate.cache.provider_class=net.sf.ehcache.hibernate.SingletonEhCacheProvider hibernate.hbm2ddl.auto=update hibernate.c3p0.acquire_increment=5 hibernate.c3p0.idle_test_period=3600 hibernate.c3p0.timeout=10800 hibernate.c3p0.max_size=25 hibernate.c3p0.min_size=1 hibernate.show_sql=false hibernate.validator.autoregister_listeners=false </value> </property> <!-- validation is performed by "hand" (see http://opensource.atlassian.com/projects/hibernate/browse/HV-281) <property name="eventListeners"> <map> <entry key="pre-insert" value-ref="beanValidationEventListener" /> <entry key="pre-update" value-ref="beanValidationEventListener" /> </map> </property> --> <property name="entityInterceptor"> <bean class="com.rreps.core.dao.hibernate.interceptor.TrackingInterceptor" /> </property> </bean> <bean id="simpleSessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation" value="classpath:hibernate.cfg.xml" /> <property name="hibernateProperties"> <value> 
hibernate.dialect=${hibernate.dialect} hibernate.query.substitutions=true 'Y', false 'N' hibernate.cache.use_second_level_cache=true hibernate.cache.provider_class=net.sf.ehcache.hibernate.SingletonEhCacheProvider hibernate.hbm2ddl.auto=update hibernate.c3p0.acquire_increment=5 hibernate.c3p0.idle_test_period=3600 hibernate.c3p0.timeout=10800 hibernate.c3p0.max_size=25 hibernate.c3p0.min_size=1 hibernate.show_sql=false hibernate.validator.autoregister_listeners=false </value> </property> <!-- property name="eventListeners"> <map> <entry key="pre-insert" value-ref="beanValidationEventListener" /> <entry key="pre-update" value-ref="beanValidationEventListener" /> </map> </property--> </bean> <bean id="sequenceSessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="configLocation" value="classpath:hibernate.cfg.xml" /> <property name="hibernateProperties"> <value> hibernate.dialect=${hibernate.dialect} hibernate.query.substitutions=true 'Y', false 'N' hibernate.cache.use_second_level_cache=true hibernate.cache.provider_class=net.sf.ehcache.hibernate.SingletonEhCacheProvider hibernate.hbm2ddl.auto=update hibernate.c3p0.acquire_increment=5 hibernate.c3p0.idle_test_period=3600 hibernate.c3p0.timeout=10800 hibernate.c3p0.max_size=25 hibernate.c3p0.min_size=1 hibernate.show_sql=false hibernate.validator.autoregister_listeners=false </value> </property> </bean> <bean id="validationFactory" class="javax.validation.Validation" factory-method="buildDefaultValidatorFactory" /> <!-- bean id="beanValidationEventListener" class="org.hibernate.cfg.beanvalidation.BeanValidationEventListener"> <constructor-arg index="0" ref="validationFactory" /> <constructor-arg index="1"> <props/> </constructor-arg> </bean--> <!-- Enable @Transactional support --> <tx:annotation-driven transaction-manager="transactionManager"/> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="auditedSessionFactory" /> </bean> <security:authentication-manager alias="authenticationManager"> <security:authentication-provider user-service-ref="userDetailsService" /> </security:authentication-manager> <bean id="userDetailsService" class="com.rreps.core.service.impl.UserDetailsServiceImpl" /> <!-- ACL stuff --> <bean id="aclCache" class="org.springframework.security.acls.domain.EhCacheBasedAclCache"> <constructor-arg> <bean class="org.springframework.cache.ehcache.EhCacheFactoryBean"> <property name="cacheManager"> <bean class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"/> </property> <property name="cacheName" value="aclCache"/> </bean> </constructor-arg> </bean> <bean id="lookupStrategy" class="org.springframework.security.acls.jdbc.BasicLookupStrategy"> <constructor-arg ref="dataSource"/> <constructor-arg ref="aclCache"/> <constructor-arg> <bean class="org.springframework.security.acls.domain.AclAuthorizationStrategyImpl"> <constructor-arg> <list> <bean class="org.springframework.security.core.authority.GrantedAuthorityImpl"> <constructor-arg value="ROLE_ADMINISTRATEUR"/> </bean> <bean class="org.springframework.security.core.authority.GrantedAuthorityImpl"> <constructor-arg value="ROLE_ADMINISTRATEUR"/> </bean> <bean class="org.springframework.security.core.authority.GrantedAuthorityImpl"> <constructor-arg value="ROLE_ADMINISTRATEUR"/> </bean> </list> </constructor-arg> </bean> </constructor-arg> <constructor-arg> <bean 
class="org.springframework.security.acls.domain.ConsoleAuditLogger"/> </constructor-arg> </bean> <bean id="aclService" class="com.rreps.core.service.impl.MysqlJdbcMutableAclService"> <constructor-arg ref="dataSource"/> <constructor-arg ref="lookupStrategy"/> <constructor-arg ref="aclCache"/> </bean> The strange thing is that the context starts normally when deployed in a webapp and @PreAuthorize and @PostFilter annotations are working fine as well... Any idea what is wrong? Here is the end of the stacktrace : ... 55 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in class path resource [applicationContext-core.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.config.internalTransactionAdvisor': Cannot resolve reference to bean 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0' while setting bean property 'transactionAttributeSource'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0': Initialization of bean failed; nested exception is java.lang.NullPointerException at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:322) ... 
67 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.config.internalTransactionAdvisor': Cannot resolve reference to bean 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0' while setting bean property 'transactionAttributeSource'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0': Initialization of bean failed; nested exception is java.lang.NullPointerException at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1308) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1067) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:511) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.aop.framework.autoproxy.BeanFactoryAdvisorRetrievalHelper.findAdvisorBeans(BeanFactoryAdvisorRetrievalHelper.java:86) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findCandidateAdvisors(AbstractAdvisorAutoProxyCreator.java:100) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findEligibleAdvisors(AbstractAdvisorAutoProxyCreator.java:86) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(AbstractAdvisorAutoProxyCreator.java:68) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:359) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:322) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:404) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1409) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) ... 
73 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0': Initialization of bean failed; nested exception is java.lang.NullPointerException at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:521) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:322) ... 91 more Caused by: java.lang.NullPointerException at org.springframework.security.access.method.DelegatingMethodSecurityMetadataSource.getAttributes(DelegatingMethodSecurityMetadataSource.java:52) at org.springframework.security.access.intercept.aopalliance.MethodSecurityMetadataSourceAdvisor$MethodSecurityMetadataSourcePointcut.matches(MethodSecurityMetadataSourceAdvisor.java:129) at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:215) at org.springframework.aop.support.AopUtils.canApply(AopUtils.java:252) at org.springframework.aop.support.AopUtils.findAdvisorsThatCanApply(AopUtils.java:284) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findAdvisorsThatCanApply(AbstractAdvisorAutoProxyCreator.java:117) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findEligibleAdvisors(AbstractAdvisorAutoProxyCreator.java:87) at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.getAdvicesAndAdvisorsForBean(AbstractAdvisorAutoProxyCreator.java:68) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:359) at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:322) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:404) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1409) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) ... 97 more
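    For reference, here is a minimal sketch of the kind of JUnit 4 test class that triggers this same context start-up outside a web container. The file name applicationContext-core.xml is taken from the stack trace above; the autowired PermissionEvaluator field is an illustrative assumption added only so the test has something to assert, not part of the original setup.

    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.security.access.PermissionEvaluator;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    // Loading the context is the whole point of the test: the NullPointerException
    // described above is thrown while the beans are being created, before the
    // test method ever runs.
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = "classpath:applicationContext-core.xml")
    public class ContextStartupTest {

        @Autowired
        private PermissionEvaluator permissionEvaluator; // resolves to aclPermissionEvaluator by type

        @Test
        public void contextLoads() {
            Assert.assertNotNull(permissionEvaluator);
        }
    }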

    Read the article

  • JPA exception: Object: ... is not a known entity type.

    - by Toto
    I'm new to JPA and I'm having problems with the autogeneration of primary key values. I have the following entity: package jpatest.entities; import java.io.Serializable; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; @Entity public class MyEntity implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; public Long getId() { return id; } public void setId(Long id) { this.id = id; } private String someProperty; public String getSomeProperty() { return someProperty; } public void setSomeProperty(String someProperty) { this.someProperty = someProperty; } public MyEntity() { } public MyEntity(String someProperty) { this.someProperty = someProperty; } @Override public String toString() { return "jpatest.entities.MyEntity[id=" + id + "]"; } } and the following main method in other class: public static void main(String[] args) { EntityManagerFactory emf = Persistence.createEntityManagerFactory("JPATestPU"); EntityManager em = emf.createEntityManager(); em.getTransaction().begin(); MyEntity e = new MyEntity("some value"); em.persist(e); /* (exception thrown here) */ em.getTransaction().commit(); em.close(); emf.close(); } This is my persistence unit: <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="JPATestPU" transaction-type="RESOURCE_LOCAL"> <provider>oracle.toplink.essentials.PersistenceProvider</provider> <class>jpatest.entities.MyEntity</class> <properties> <property name="toplink.jdbc.user" value="..."/> <property name="toplink.jdbc.password" value="..."/> <property name="toplink.jdbc.url" value="jdbc:mysql://localhost:3306/jpatest"/> <property name="toplink.jdbc.driver" value="com.mysql.jdbc.Driver"/> <property name="toplink.ddl-generation" value="create-tables"/> </properties> </persistence-unit> </persistence> When I execute the program I get the following exception in the line marked with the proper comment: Exception in thread "main" java.lang.IllegalArgumentException: Object: jpatest.entities.MyEntity[id=null] is not a known entity type. at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.registerNewObjectForPersist(UnitOfWorkImpl.java:3212) at oracle.toplink.essentials.internal.ejb.cmp3.base.EntityManagerImpl.persist(EntityManagerImpl.java:205) at jpatest.Main.main(Main.java:...) What am I missing?
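    As a side note, here is a minimal variant of the main method above with explicit rollback handling. It is only a sketch that makes the transaction boundaries clearer; it does not by itself resolve the exception, which says the persistence unit never registered jpatest.entities.MyEntity as an entity.

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import jpatest.entities.MyEntity;

    public class Main {
        public static void main(String[] args) {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("JPATestPU");
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                em.persist(new MyEntity("some value"));
                em.getTransaction().commit();
            } catch (RuntimeException ex) {
                // Roll back instead of leaving the transaction open when persist/commit fails.
                if (em.getTransaction().isActive()) {
                    em.getTransaction().rollback();
                }
                throw ex;
            } finally {
                em.close();
                emf.close();
            }
        }
    }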

    Read the article

  • Blocking access to websites with Objective-C / root privileges in Objective-C

    - by kvaruni
    I am writing a program in Objective-C (Xcode 3.2, on Snow Leopard) that can either selectively block certain sites for a duration or allow only certain sites (and thus block all others) for a duration. The reasoning behind this program is rather simple: I tend to get distracted when I have full internet access, but I do need internet access during my working hours to reach a number of work-related websites. Clearly, this is not a permanent block; it only helps me focus whenever I find myself wandering a bit too much. At the moment, I am using a Unix script that is called via AppleScript to obtain Administrator permissions. It activates a number of ipfw rules and clears them after a specific duration to restore full internet access. Simple and effective, but since I am running as a standard user, it gets cumbersome to enter my administrator password each and every time I want to go "offline". Furthermore, this is a great opportunity to learn to work with Xcode and Objective-C. At the moment, everything works as expected, minus the actual blocking. I can add a number of sites to a list, specify whether I want to block or allow these websites, and "start" the blocking by specifying a time until which I want to stay "offline". However, I find it hard to obtain clear information on how I can run a privileged Unix command from Objective-C. Ideally, I would like to store the Administrator account's credentials in the Keychain and use them later, so that I can move into "offline" mode with the convenience of clicking a button. Even more ideally, there might be some class in Objective-C with which I can block access to some/all websites for this particular user without needing to rely on privileged Unix commands. A third possibility is starting this program with root permissions and then reducing the permissions until I need them, but since this is a GUI application nested in the menu bar of OS X, the results are rather awkward, and getting it to run with root permission each and every time is no easy task. Can anyone offer me some pointers or advice? Please, no security warnings; I am fully aware that what I want to do is a potential security threat.

    Read the article

  • Disable Windows 8 edge gestures/hot corners for multi-touch applications while running in full screen

    - by Bondye
    I have a full-screen AS3 game made with Adobe AIR that runs on Windows 7. The game is not meant to be easy to exit (think kiosk mode: the only way out is pressing Esc and entering a password). Now I want this game to run on Windows 8. The game works as expected, but the annoying parts are the edge gestures/hot corners (left, top, right, bottom) and the shortcuts. I've read articles, but none of them helped. People talk about registry edits, but I can't get that working, and the user would need to restart his/her computer. I want to open my game, turn off the gestures/hot corners, and have them become available again when the game closes. I have seen other applications accomplish exactly this. I found a way to detect the gestures, but how do I ignore their actions? I also looked at ASUS Smart Gestures, but that is for the touch-pad, and I have tried Classic Shell, but I need to disable the edge gestures/hot corners without such programs, just on the fly. I also found the following, but I don't know how to implement it: HRESULT SetTouchDisableProperty(HWND hwnd, BOOL fDisableTouch) { IPropertyStore* pPropStore; HRESULT hrReturnValue = SHGetPropertyStoreForWindow(hwnd, IID_PPV_ARGS(&pPropStore)); if (SUCCEEDED(hrReturnValue)) { PROPVARIANT var; var.vt = VT_BOOL; var.boolVal = fDisableTouch ? VARIANT_TRUE : VARIANT_FALSE; hrReturnValue = pPropStore->SetValue(PKEY_EdgeGesture_DisableTouchWhenFullscreen, var); pPropStore->Release(); } return hrReturnValue; } Does anyone know how I can do this, or can you point me in the right direction? I have tried a few things in C# and C++, but I am not a skilled C#/C++ developer, and since the game is made in AS3 it will be hard to implement this in C#/C++. I am working on a Lenovo AIO (all-in-one) with Windows 8.

    Read the article

  • Error deploying a web application on WebLogic 10.3 using Maven 2: "Can't find wsdl /wsdls/wsat.wsdl"

    - by Marcos Carceles
    Hi, I'm using maven for deploying a web application in my Weblogic 10.3 server remotely. I created my pom file based on the indication on this previous question: Using maven as build tool for Weblogic 10.3 My pom.xml file is: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.balfourbeatty.horizon.maven.test</groupId> <artifactId>maven-test-webapp</artifactId> <packaging>war</packaging> <version>1.0-SNAPSHOT</version> <name>maven-test-webapp Maven Webapp</name> <url>http://maven.apache.org</url> <properties> <weblogic.version>10.3</weblogic.version> </properties> <build> <plugins> <plugin> <groupId>org.apache.myfaces.trinidadbuild</groupId> <artifactId>maven-jdev-plugin</artifactId> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>weblogic-maven-plugin</artifactId> <version>2.9.1</version> <configuration> <name>maven-test-webapp</name> <adminServerHostName>******************</adminServerHostName> <adminServerPort>****</adminServerPort> <adminServerProtocol>t3</adminServerProtocol> <userId>******</userId> <password>*****</password> <upload>true</upload> <remote>true</remote> <verbose>true</verbose> <debug>true</debug> <targetNames>WLS_Spaces</targetNames> <noExit>true</noExit> <projectPackaging>war</projectPackaging> </configuration> <dependencies> <dependency> <groupId>com.sun</groupId> <artifactId>tools</artifactId> <version>1.6</version> <scope>system</scope> <systemPath>${java.home}/../lib/tools.jar</systemPath> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>weblogic</artifactId> <version>${weblogic.version}</version> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>webservices</artifactId> <version>${weblogic.version}</version> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.full</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.i18n</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.rmi.client</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>javax.enterprise.deploy</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>webserviceclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.wls</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.identity</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wlclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.transaction</artifactId> 
<version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.classloaders</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wljmsclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.management.core</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wls-api</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.descriptor</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.logging</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.socket.api</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.digest</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.workmanager</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.lifecycle</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.wrapper</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wlsafclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.management.jmx</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.descriptor.wl</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>javax.mail</artifactId> <version>10.3</version> </dependency> </dependencies> </plugin> </plugins> <finalName>maven-test-webapp</finalName> </build> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <dependency> <groupId>org.codehaus.mojo</groupId> <artifactId>weblogic-maven-plugin</artifactId> <version>2.9.1</version> </dependency> </dependencies> <distributionManagement> <!-- use the following if you're not using a snapshot version. --> <repository> <id>internal</id> <name>Archiva Managed Internal Repository</name> <url>http://localhost:8180/archiva/repository/internal</url> </repository> <!-- use the following if you ARE using a snapshot version. --> <snapshotRepository> <id>snapshots</id> <name>Archiva Managed Snapshot Repository</name> <url>http://localhost:8180/archiva/repository/snapshots</url> </snapshotRepository> </distributionManagement> </project> All the dependencies are already resolved properly, as they are in the local archiva repository. 
The application does not contain any web-service, being just a "hello world" application. /index.jsp /WEB-INF/web.xml The error I get is: [BasicOperation.execute():423] : Initiating deploy operation for app, maven-test-webapp, on targets: [BasicOperation.execute():425] : WLS_Spaces Task 14 initiated: [Deployer:149026]deploy application maven-test-webapp on WLS_Spaces. dumping Exception stack Task 14 failed: [Deployer:149026]deploy application maven-test-webapp on WLS_Spaces. Target state: deploy failed on Server WLS_Spaces weblogic.wsee.ws.WsException: When processing WebService module 'maven-test-webapp.war'. Can't find wsdl /wsdls/wsat.wsdl at weblogic.wsee.deploy.WSEEWebModule.loadWsdlDefinitions(WSEEWebModule.java:159) at weblogic.wsee.deploy.WSEEModule.loadWsdl(WSEEModule.java:334) at weblogic.wsee.deploy.WSEEAnnotationProcessor.isWsdlHasPolicy(WSEEAnnotationProcessor.java:312) at weblogic.wsee.deploy.WSEEAnnotationProcessor.process(WSEEAnnotationProcessor.java:91) at weblogic.wsee.deploy.WSEEAnnotationProcessor.process(WSEEAnnotationProcessor.java:51) at weblogic.wsee.deploy.WSEEModule.prepare(WSEEModule.java:102) at weblogic.wsee.deploy.ServletDeployListener.contextPrepared(ServletDeployListener.java:26) at weblogic.servlet.internal.EventsManager$FireContextPreparedAction.run(EventsManager.java:503) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) at weblogic.servlet.internal.EventsManager.notifyContextPreparedEvent(EventsManager.java:162) at weblogic.servlet.internal.WebAppServletContext.initContextListeners(WebAppServletContext.java:1782) at weblogic.servlet.internal.WebAppServletContext.prepare(WebAppServletContext.java:1136) at weblogic.servlet.internal.HttpServer.doPostContextInit(HttpServer.java:449) at weblogic.servlet.internal.HttpServer.loadWebApp(HttpServer.java:424) at weblogic.servlet.internal.WebAppModule.registerWebApp(WebAppModule.java:924) at weblogic.servlet.internal.WebAppModule.prepare(WebAppModule.java:356) at weblogic.application.internal.flow.ScopedModuleDriver.prepare(ScopedModuleDriver.java:176) at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:199) at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:391) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:83) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:59) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:43) at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:1221) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:83) at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:367) at weblogic.application.internal.SingleModuleDeployment.prepare(SingleModuleDeployment.java:39) at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:154) at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:60) at weblogic.deploy.internal.targetserver.operations.ActivateOperation.createAndPrepareContainer(ActivateOperation.java:207) at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:98) at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217) at 
weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747) at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216) at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250) at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159) at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:157) at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:12) at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:45) at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) Does anyone have any idea on what could the problem be? Many thanks!

    Read the article

  • How to send email from an EC2 instance using GoDaddy's SMTP server?

    - by Matt Greer
    SMTP is a whole new ballgame for me, but I am reading up on it. I am attempting to send email from my EC2 instance using GoDaddy's SMTP server. My domain name is registered through GoDaddy and I have 2 email accounts with them. I can successfully send the email from my dev box no problem. my web.config <system.net> <mailSettings> <smtp from="[email protected]" deliveryMethod="Network"> <network host="smtpout.secureserver.net" clientDomain="mydomain.com" port="25" userName="[email protected]" password="mypassword" defaultCredentials="false" /> </smtp> </mailSettings> </system.net> In my ASP.NET app: MailMessage mailMessage = new MailMessage("[email protected]", recipientEmail, emailSubject, body); mailMessage.IsBodyHtml = false; SmtpClient mailClient = new SmtpClient(); mailClient.Send(mailMessage); Very typical, simple use of System.Net.Mail.SmtpClient. The mail client is picking up the settings from my web.config as expected. From the EC2 instance, the same setup yields: System.Net.Mail.SmtpException: Failure sending mail. ---> System.IO.IOException: Unable to read data from the transport connection: net_io_connectionclosed. at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine) at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine) at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller) at System.Net.Mail.SmtpConnection.GetConnection(ServicePoint servicePoint) at System.Net.Mail.SmtpClient.Send(MailMessage message) --- End of inner exception stack trace --- I have searched high and low and not found anyone else attempting this. All GoDaddy smtp situations I have found involve people being hosted by GoDaddy using their relay server. Some more info: My EC2 instance is Windows Server 2008 with IIS 7. The app is running in .NET 4 I can successfully use Gmail's SMTP server on the EC2 instance by using their port, setting SmtpClient.EnableSsl to true, and sending the mail through a gmail account. But we want to send the email from an account on our domain. I have port 25 open on both the Windows firewall and Amazon's Security group based firewall. I have played with Wireshark and noticed my SMTP related traffic was talking to ports in the 5,000s, so out of desperation I opened them all up to no avail (then closed them back down) As far as I know my EC2 instance's IP address is not black listed by GoDaddy. I have a feeling I'm just missing something fundamental. I also have a feeling someone is going to recommend I use AuthSmtp or something similar, I'll agree, and have had wasted the past 6 hours :)

    Read the article

  • DirectoryServicesCOMException when working with System.DirectoryServices.AccountManagement

    - by antik
    I'm attempting to determine whether a user is a member of a given group using System.DirectoryServices.AccountManagment. I'm doing this inside a SharePoint WebPart in SharePoint 2007 on a 64-bit system. Project targets .NET 3.5 Impersonation is enabled in the web.config. The IIS Site in question is using an IIS App Pool with a domain user configured as the identity. I am able to instantiate a PrincipalContext as such: PrincipalContext pc = new PrincipalContext(ContextType.Domain) Next, I try to grab a principal: using (PrincipalContext pc = new PrincipalContext(ContextType.Domain)) { GroupPrincipal group = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\somegroup"); // snip: exception thrown by line above. } Both the above and UserPrincipal.FindByIdentity with a user SAM throw a DirectoryServicesCOMException: "Logon failure: Unknown user name or bad password" I've tried passing in a complete SAMAccountName to either FindByIdentity (in the form of MYDOMAIN\username) or just the username with no change in behavior. I've tried executing the code with other credentials using both the HostingEnvironment.Impersonate and SPSecurity.RunWithElevatedPrivileges approaches and also experience the same result. I've also tried instantiating my context with the domain name in place: Principal Context pc = new PrincipalContext(ContextType.Domain, "MYDOMAIN"); This throws a PrincipalServerDownException: "The server could not be contacted." I'm working on a reasonably hardened server. I did not lock the system down so I am unsure exactly what has been done to it. If there are credentials I need to allocate to my pool identity's user or in the domain security policy in order for these to work, I can configure the domain accordingly. Are there any settings that would be preventing my code from running? Am I missing something in the code itself? Is this just not possible in a SharePoint web? EDIT: Given further testing, my code functions correctly when tested in a Console application targeting .NET 4.0. I targeted a different framework because I didn't have AccountManagement available to me in the console app when targeting .NET 3.5 for some reason. using (PrincipalContext pc = new PrincipalContext(ContextType.Domain)) using (UserPrincipal adUser = UserPrincipal.FindByIdentity(pc, "MYDOMAIN\joe.user")) using (GroupPrincipal adGroup = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\user group")) { if (adUser.IsMemberOf(adGroup)) { Console.WriteLine("User is a member!"); } else { Console.WriteLine("User is NOT a member."); } } What varies in my SharePoint environment that might prohibit this function from executing?

    Read the article

  • System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host

    - by coffeeaddict
    We have 2 identical database tables on our dev server. In both is a table that holds an X509Certificate2 data in one of its fields. When I grab the cert along with the password that I've been using all along, it works over the first database just fine. I run the same code though over the 2nd database and get this error and I don't get why if the setup is exactly the same and the database is also on this server. So I'm not sure why when I switch my connection string to talk to Database2, even though it's setup the same and the code I'm running over it is the same, it complains. Here's the stack trace: An existing connection was forcibly closed by the remote host Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SocketException (0x2746): An existing connection was forcibly closed by the remote host] System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +232 [IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.] System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +7035903 System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) +58 System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) +116 System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123 System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86 System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123 System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86 System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123 System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86 System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123 System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) +7184357 System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) +217 System.Threading.ExecutionContext.runTryCode(Object userData) +376 System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) +0 System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) +98 System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) +1134 System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) +88 System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) +20 System.Net.ConnectStream.WriteHeaders(Boolean async) +360 [WebException: The underlying connection was closed: An unexpected error occurred on a send.] 
System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) +857631 System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) +10 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +243 Pmall.PayPal.PayPalApi.PayPalAPIAASoapBinding.SetExpressCheckout(SetExpressCheckoutReq SetExpressCheckoutReq) in C:\www\ssss\ssss\ssss\ssss\Reference.cs:1304 ssss.PayPal.ExpressCheckout.PayPalCheckout.SetExpressCheckout(PaymentDetailsType[] paymentDetails, String returnURL, String cancelURL, PayPalPaymentFlowType paymentFlowType) in C:\www\ssss\ssss\ssss\PayPalCheckout.cs:96 ssss.Web.ssss.SetExpressCheckout() in C:\www\ssss\ssss\ssss.aspx.cs:83 ssss.Web.ssss.Page_Load(Object sender, EventArgs e) in C:\www\ssss\ssss\Register.aspx.cs:24 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42 System.Web.UI.Control.OnLoad(EventArgs e) +132 System.Web.UI.Control.LoadRecursive() +66 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428

    Read the article

  • How to resolve a javax.mail.AuthenticationFailedException?

    - by jl
    Hi, I am doing a sendMail Servlet with JavaMail. I have javax.mail.AuthenticationFailedException on my output. Can anyone please help me out? Thanks. sendMailServlet code: try { String host = "smtp.gmail.com"; String from = "[email protected]"; String pass = "pass"; Properties props = System.getProperties(); props.put("mail.smtp.starttls.enable", "true"); props.put("mail.smtp.host", host); props.put("mail.smtp.user", from); props.put("mail.smtp.password", pass); props.put("mail.smtp.port", "587"); props.put("mail.smtp.auth", "true"); props.put("mail.debug", "true"); Session session = Session.getDefaultInstance(props, null); MimeMessage message = new MimeMessage(session); Address fromAddress = new InternetAddress(from); Address toAddress = new InternetAddress("[email protected]"); message.setFrom(fromAddress); message.setRecipient(Message.RecipientType.TO, toAddress); message.setSubject("Testing JavaMail"); message.setText("Welcome to JavaMail"); Transport transport = session.getTransport("smtp"); transport.connect(host, from, pass); message.saveChanges(); Transport.send(message); transport.close(); }catch(Exception ex){ out.println("<html><head></head><body>"); out.println("ERROR: " + ex); out.println("</body></html>"); } Output on GlassFish 2.1: DEBUG SMTP: trying to connect to host "smtp.gmail.com", port 587, isSSL false 220 mx.google.com ESMTP 36sm10907668yxh.13 DEBUG SMTP: connected to host "smtp.gmail.com", port: 587 EHLO platform-4cfaca 250-mx.google.com at your service, [203.126.159.130] 250-SIZE 35651584 250-8BITMIME 250-STARTTLS 250-ENHANCEDSTATUSCODES 250 PIPELINING DEBUG SMTP: Found extension "SIZE", arg "35651584" DEBUG SMTP: Found extension "8BITMIME", arg "" DEBUG SMTP: Found extension "STARTTLS", arg "" DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg "" DEBUG SMTP: Found extension "PIPELINING", arg "" STARTTLS 220 2.0.0 Ready to start TLS EHLO platform-4cfaca 250-mx.google.com at your service, [203.126.159.130] 250-SIZE 35651584 250-8BITMIME 250-AUTH LOGIN PLAIN 250-ENHANCEDSTATUSCODES 250 PIPELINING DEBUG SMTP: Found extension "SIZE", arg "35651584" DEBUG SMTP: Found extension "8BITMIME", arg "" DEBUG SMTP: Found extension "AUTH", arg "LOGIN PLAIN" DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg "" DEBUG SMTP: Found extension "PIPELINING", arg "" DEBUG SMTP: Attempt to authenticate AUTH LOGIN 334 VXNlcm5hbWU6 aWpveWNlbGVvbmdAZ21haWwuY29t 334 UGFzc3dvcmQ6 MTIzNDU2Nzhf 235 2.7.0 Accepted DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc] DEBUG SMTP: useEhlo true, useAuth true
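    Note that the code above authenticates a manually created transport and then calls the static Transport.send(message), which opens its own connection and asks the session for credentials; the transport that was connected by hand is never used for the send. Below is a minimal sketch of the same Gmail send with the credentials supplied through an Authenticator on the session instead (the addresses and password are placeholders).

    import java.util.Properties;
    import javax.mail.Authenticator;
    import javax.mail.Message;
    import javax.mail.PasswordAuthentication;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class GmailSendSketch {
        public static void main(String[] args) throws Exception {
            final String from = "user@gmail.com";   // placeholder account
            final String pass = "password";          // placeholder password

            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.gmail.com");
            props.put("mail.smtp.port", "587");
            props.put("mail.smtp.auth", "true");
            props.put("mail.smtp.starttls.enable", "true");

            // The Authenticator is what the transport created by Transport.send(...)
            // consults when it needs credentials.
            Session session = Session.getInstance(props, new Authenticator() {
                @Override
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication(from, pass);
                }
            });

            MimeMessage message = new MimeMessage(session);
            message.setFrom(new InternetAddress(from));
            message.setRecipient(Message.RecipientType.TO, new InternetAddress("recipient@example.com"));
            message.setSubject("Testing JavaMail");
            message.setText("Welcome to JavaMail");

            Transport.send(message); // uses the session's authenticator, no manual connect needed
        }
    }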

    Read the article

  • jQuery .live() handler acts strangely in IE but works fine after a forced refresh

    - by Y.G.J
    Hi guys, something is wrong with my code and I can't figure out what it is. I have a div with id="personaltab" that contains a form for logging the user in with a username and password. On success, the jQuery empties the div and inserts the bidding form. When the user then tries to bid, the second AJAX handler bound to the button does fire, but for some reason it skips the empty() call and just appends the AJAX response to the div. I have checked this in IE and Chrome, and it works fine in Chrome. Here is my code: $("#login").click(function() { var id = $("input#pid").val(); var user = $("input#puser").val(); var pass = $("input#ppass").val(); var dataString = 'id='+ id + '&user='+ user + '&pass=' + pass; if (user == "") { alert("error"); $("input#puser").focus(); return false; } if (pass == "") { alert("error"); $("input#ppass").focus(); return false; } $.ajax({ type: "POST", url: "loginpersonal.asp", data: dataString, success: function(msg) { if (msg=="False") { alert("error"); $("#personaltab").show(); } else { $("#personaltab").fadeOut("normal",function(){ $("#personaltab").empty(); $("#personaltab").append(msg); $("#personaltab").slideDown(); }); } }, error: function (XMLHttpRequest, textStatus, errorThrown) { alert('error'); } }); return false; }); $("#sendbid").live("click", function(){ var startat = $("input[name=startat]").val(); var sprice = $("input[name=sprice]").val(); if (parseInt(sprice)<=parseInt(startat)) { alert("error"); $("input[name=sprice]").focus(); return false; } else { var payment = $("select[name=payment]").val(); if ($('input[name=credit]').is(':checked') ){ var credit = true; } var prodid = $("input[name=id]").val(); var dataString = 'id='+ prodid + '&price='+ sprice + '&payment=' + payment + '&credit=' + credit; $.ajax({ type: "POST", url: "loginpersonal.asp", data: dataString, success: function(msg) { $("#personaltab").empty(); $("#personaltab").append(msg); $("#personaltab").show(); }, error: function (XMLHttpRequest, textStatus, errorThrown) { alert('error'); } }); } return false; });

    Read the article

  • IdHttp Post Method Delphi 2010

    - by Dusten S
    Like others before me, I'm having troubles using the IdHttp(Indy 10.5.5) component in Delphi 2010. The code works fine in Delphi 7: var XMLString : AnsiString; lService : AnsiString; ResponseStream: TMemoryStream; InputStringList : TStringList; begin ResponseStream := TMemoryStream.Create; InputStringList := TStringList.Create; XMLString :='<?xml version="1.0" encoding="ISO-8859-1"?> '+ '<!DOCTYPE pnet_imessage_send PUBLIC "-//PeopleNet//pnet_imessage_send" "http://open.peoplenetonline.com/dtd/pnet_imessage_send.dtd"> '+ '<pnet_imessage_send> '+ ' <cid>username</cid> '+ ' <pw>password</pw> '+ ' <vehicle_number>tr01</vehicle_number> '+ ' <deliver>now</deliver> '+ ' <action> '+ ' <action_type>reply_with_freeform</action_type> '+ ' <urgent_reply>yes</urgent_reply> '+ ' </action> '+ ' <freeform_message>Test Message Version 2</freeform_message> '+ '</pnet_imessage_send> '; lService := 'imessage_send'; InputStringList.Values['service'] := lService; InputStringList.Values['xml'] := XMLString; try IdHttp1.Request.Accept := '*/*'; IdHttp1.Request.ContentType := 'text/XML'; IdHTTP1.Post('http://open.peoplenetonline.com/scripts/open.dll', InputStringList, ResponseStream); ... finally ResponseStream.Free; InputStringList.Free; end; The only differences so far between this and the D7 code is that I've changed the String types to AnsiString, and added the HTTP Request properties. The response I get back from the server is 'XML failed to parse. Whitespace expected at Line:1 Position: 19', I'm assuming the XML got garbled up somewhere in the process, but I can't figure our where I'm going wrong. Any ideas?

    Read the article

  • Snow Leopard sqlite3-ruby install problem

    - by JZ
    UPDATE 3/20/10 I'm running Mac OSX Snow Leopard, this problem is caused by a recent train wreck in which I updated ruby without RVM. I've attempted to properly install/run RVM, however I can't get it to work. I am unable to install the sqlite3-ruby gem. I get the following ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension. How do I fix this? justin-zollarss-mac-pro:~ justinz$ rails -v Rails 2.3.5 justin-zollarss-mac-pro:~ justinz$ ruby -v ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-darwin10.2.0] justin-zollarss-mac-pro:~ justinz$ gem -v 1.3.5 justin-zollarss-mac-pro:~ justinz$ which gem /usr/local/bin/gem justin-zollarss-mac-pro:~ justinz$ whereis gem /usr/bin/gem justin-zollarss-mac-pro:~ justinz$ which ruby /usr/local/bin/ruby justin-zollarss-mac-pro:~ justinz$ whereis ruby /usr/bin/ruby justin-zollarss-mac-pro:~ justinz$ which rails /usr/local/bin/rails justin-zollarss-mac-pro:~ justinz$ whereis rails /usr/bin/rails justin-zollarss-mac-pro:~ justinz$ gem list *** LOCAL GEMS *** actionmailer (2.3.5) actionpack (2.3.5) activerecord (2.3.5) activeresource (2.3.5) activesupport (2.3.5) builder (2.1.2) bundler (0.9.11) columnize (0.3.1) erubis (2.6.5) fastercsv (1.5.1) ffi (0.6.3) gbarcode (0.98.16) i18n (0.3.5) linecache (0.43) mail (2.1.3) memcache-client (1.8.0) prawn (0.8.4) prawn-core (0.8.4) prawn-layout (0.8.4) prawn-security (0.8.4) rack (1.1.0, 1.0.1) rack-mount (0.6.1) rack-test (0.5.3) rails (2.3.5) rake (0.8.7) ruby-debug (0.10.3) ruby-debug-base (0.10.3) rubygems-update (1.3.6) sqlite3 (0.0.8) text-format (1.0.0) thor (0.13.4) tzinfo (0.3.17) justin-zollarss-mac-pro:~ justinz$ sudo gem install sqlite3-ruby Password: Building native extensions. This could take a while... ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension. /usr/local/bin/ruby extconf.rb checking for fdatasync() in -lrt... no checking for sqlite3.h... yes checking for sqlite3_open() in -lsqlite3... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/local/bin/ruby --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib --with-rtlib --without-rtlib --with-sqlite3lib --without-sqlite3lib Gem files will remain installed in /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5 for inspection. Results logged to /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/gem_make.out

    Read the article

  • Download a file with DefaultHttpClient and preemptive authentication

    - by Nils
    After I had a lot of problems with preemptive authentication , I got it finally working. Now the next problem. I want to get a file with it, but I don't know how. I thought the file data might be in the variable response, but it isn't. Any ideas how this might work? I'm trying it since days without success :( - Basically I'm trying to download an jpeg file, which is on a server protected by prem. auth. // BASIC AUTH /* * ==================================================================== * * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * ==================================================================== * * This software consists of voluntary contributions made by many * individuals on behalf of the Apache Software Foundation. For more * information on the Apache Software Foundation, please see * <http://www.apache.org/>. */ //http://svn.apache.org/repos/asf/httpcomponents/httpclient/branches/4.0.x/httpclient/src/examples/org/apache/http/examples/client/ClientPreemptiveBasicAuthentication.java httpclient = new DefaultHttpClient(); httpclient.getCredentialsProvider().setCredentials( new AuthScope(host, port), new UsernamePasswordCredentials(username, password)); // Generate BASIC scheme object and stick it to the local // execution context BasicHttpContext localcontext = new BasicHttpContext(); BasicScheme basicAuth = new BasicScheme(); localcontext.setAttribute("preemptive-auth", basicAuth); //first request interceptor httpclient.addRequestInterceptor(new PreemptiveAuth(), 0); HttpHost targetHost = new HttpHost(host, port, "http"); //HttpGet httpget = new HttpGet("/"); HttpGet httpget = new HttpGet(http.url); System.out.println("executing request" + httpget.getRequestLine()); /// !!! HttpResponse response = httpclient.execute(targetHost, httpget, localcontext); HttpEntity entity = response.getEntity(); System.out.println("----------------------------------------"); System.out.println("+"+response.getStatusLine()+"+"); ...
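    The JPEG bytes are not on the HttpResponse object itself but in its entity. Below is a minimal sketch of streaming the entity to disk once the request above has succeeded; the target file name is a placeholder.

    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.http.HttpEntity;
    import org.apache.http.HttpResponse;

    public class SaveEntityToFile {
        // Copies the response body (e.g. the JPEG) to the given path.
        static void saveTo(HttpResponse response, String path) throws Exception {
            HttpEntity entity = response.getEntity();
            if (entity == null) {
                return; // nothing to save
            }
            InputStream in = entity.getContent();          // the raw image bytes
            OutputStream out = new FileOutputStream(path); // e.g. "photo.jpg"
            try {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            } finally {
                out.close();
                in.close(); // fully consuming/closing the stream releases the connection
            }
        }
    }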

    Read the article

  • Using Wicked with Devise for a sign up wizard

    - by demondeac11
    I am using devise with wicked to create a sign up wizard, but I am unsure about a problem I am having creating profiles. After a user provides their email & password they are forwarded to a step to create a profile based on whether they have specified they are a shipper or carrier. However I am unsure what the code should be in the controller and the forms to generically create a profile. Here is the code I have for the application: The steps controller: class UserStepsController < ApplicationController include Wicked::Wizard steps :carrier_profile, :shipper_profile def create @user = User.last case step when :carrier_profile @profile = CarrierProfile.create!(:dot => params[:dot]) if @profile.save render_wizard @user else flash[:alert] = "Record not saved" end when :shipper_profile @profile = ShipperProfile.create!(params[:shipper_profile) if @profile.save render_wizard @user else flash[:alert] = "Record not saved" end end end end end def show @user = User.last @carrier_profile = CarrierProfile.new @shipper_profile = ShipperProfile.new case step when :carrier_profile skip_step if @user.shipper? when :shipper_profile skip_step if @user.carrier? end render_wizard end end The form for a carrier profile: <% form_for @carrier_profile , url: wizard_path, method: :post do |f| %> <div> <%= f.label :dot, "Please enter your DOT Number:" %> <%= f.text_field :dot %> </div> <%= f.submit "Next Step", class: "btn btn-primary" %> <% end %> The form for a shipper profile: <% form_for @shipper_profile , url: wizard_path, method: :post do |f| %> <div> <%= f.label :company_name, "What is your company name?" %> <%= f.text_field :company_name %> </div> <%= f.submit "Next Step", class: "btn btn-primary" %> <% end %> The user model: class User < ActiveRecord::Base has_one :carrier_profile has_one :shipper_profile end How would I be able to write a generic new and create method to handle creating both profiles? With the current code it is stating that the user_steps controller has no POST method, although if I run rake routes I find that this is untrue.

    Read the article

  • Problem with NHibernate and iSeries DB2

    - by chrisjlong
    Ok So I have an AS400/iSeries running v5r4. I have an application that was using classic NHibernate to connect and do some basic crud. Now I have pulled that app (which sat for 2 years) off the shelf of TFS and onto a new PC and cannot seem to get it running. Here is my Hibernate Config: <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.provider"> NHibernate.Connection.DriverConnectionProvider </property> <property name="dialect"> NHibernate.Dialect.DB2400Dialect </property> <property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property> <property name="connection.connection_string"> DataSource=207.206.106.19; Database=AS400; userID=XXXXXX; Password=XXXXXXX; LibraryList=FMSFILTST,BEFFILT,HRDBFT,HRCSTFT,J20##X2DEV,GLCUSTDEV,OSL@@F3DEV; Naming=System; Initial Catalog=*SYSBAS; </property> <property name="use_outer_join">true</property> <property name="query.substitutions"> true 1, false 0, yes 'Y', no 'N' </property> <property name="show_sql">false</property> <mapping assembly="BusinessLogic" /> </session-factory> </hibernate-configuration> I have all the proper DLL's included (NHibernate, castle, iesi, antlr3 , log4 etc). Also have this line in my web.config <runtime> <assemblyBinding> <qualifyAssembly partialName="IBM.Data.DB2.iSeries" fullName="IBM.Data.DB2.iSeries,Version=10.0.0.0,PublicKeyToken=9CDB2EBFB1F93A26,Culture=neutral"/> </assemblyBinding> </runtime> Yet I am still getting the following error as soon as I call NHibernate.Cfg.Configuration().Configure().BuildSessionFactory().OpenSession(); The error is as follows Unable to cast object of type 'IBM.Data.DB2.iSeries.iDB2Connection' to type 'System.Data.Common.DbCommand' I am dying to get some help with this. Any assistance is appreciated. Thanks!

    Read the article
