Search Results

Search found 7702 results on 309 pages for 'nested includes'.


  • Rosewill RSV-S5 and its transfer speeds

    - by DoomStone
    I have just bought a Rosewill RSV-S5 and installed 5 x 1.5 TB Western Digital Green disks in it. After that I created a RAID 5 array across all of them with the software that came with the hardware. Now the RAID itself works fine, but it is SLOW: I can only reach a maximum of 25 MB/s, and if SABnzbd+ is downloading at 5 MB/s it has a hard time streaming a normal DivX (700 MB) movie. Is this normal, or is there something wrong?

    Edit: it should be able to handle 3 Gbps = 384 megabytes/second. Edit 2: As you can see, I am only downloading at 3.76 MB/s and I'm trying to watch V s02e08 (720p), but it is completely unwatchable: I can watch 30 seconds, and then it buffers for 20 seconds. Edit: other information that might be required: I'm running Windows Server 2008 R2, optimized for program performance. Windows is installed on a 60 GB SSD. I have a 50 Mb/s internet connection and a 1 Gb/s LAN, all connected with Cat6 Ethernet cables. The MCE uses a Gigabyte EP35C-DS3R motherboard with 2 GB of DDR2 RAM. Edit 3: I used a chunk size of 128 KB.

    Edit 4: I found this review on Newegg. Pros: The enclosure for 5 x 2 TB hard drives is fine. This is basically a rebranded Sans Digital TR5M-B product. For support, Rosewill tells you to contact Sans Digital; there is no direct support from Silicon Image for the computer RAID card. Cons: It includes a Silicon Image 3132 RAID card for the computer, with extremely slow RAID 5 writes (our tests: ~10 MB/s), compared to a regular internal local drive write of 30-60 MB/s. We basically dumped the Sil3132 card and replaced it with a HighPoint RocketRAID 622 card for an extra $69.99. Note for the RR622: turn off ECRC (end-to-end CRC check) for the card to work on an IBM xServer. What took 12 hours to copy now takes 2-3 hours. Sans Digital realized the problem and has a newer model, the TR5M-BP TowerRAID Plus, that comes with the HighPoint RocketRAID 622 card; Rosewill should discontinue this product and go with the TR5M-BP. We could not get the Silicon Image RAID management software to work with a complicated 2008 R2 server with 10 NICs; the application doesn't know how to talk to the localhost port with all those NICs. No updates from Silicon Image, and support requests to Sans Digital were ignored, so we gave up on the Sil3132 card. Save yourself a lot of headaches and get the RR622 card too if you are going to buy this product. Other thoughts: The newer model is the TR5M-BP TowerRAID Plus, which comes with the HighPoint RocketRAID 622 RAID card for the PC instead of the Silicon Image Sil3132. According to Sans Digital, RAID 5 performance for the Sil3132 is 80 MB/s read and 19 MB/s write, versus 154 MB/s read and 149 MB/s write for the RR622. Our RR622 tests gave (8 TB RAID 5) writes of ~80-110 MB/s; copying a 40 GB file took 8 minutes.

    So I have now ordered a HighPoint RocketRAID 622 2P ext SATA III and hope that it will solve my problems.

    Read the article

  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, an Asus M2V MX motherboard, and a single 3 TB WDC Green (another one may be installed later as a mirror). It's the cheapest solution I found that includes ECC memory, and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for: music storage and streaming to other desktop computers; storage of scanned slides (3-4k slides, a 180 MB TIFF each plus a reduced-quality JPEG version); streaming of these photos to a local iPad 2 (maybe the Plex app? not yet sure); (one additional) remote backup via rsync/ssh or ZFS send/receive. It will be controlled via remote ssh, maybe VNC, with no monitor attached. The absolute requirement is a reliable ZFS solution, plus the ability to easily install packages/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS).

    I have mainly three options: NAS4free/FreeNAS, OpenIndiana, and Solaris 11 Express (yeah yeah, I know the license requirements; I will write a perl script on it to count it as a development machine). Problems: NAS4free/FreeNAS (I tested only NAS4free) require an embedded installation for remote upgrading, but a full install for easy addition of software packages. Since I need at least AirVideo Server (linux/win) and the Plex app (win/linux) to stream the photos and some videos to the iPad (they both require VirtualBox), but I cannot be there to install updates, NAS4free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris also has another advantage: the CrashPlan client supports Solaris and I'm already using it for other backups. I would like to leave that option open, even if I will probably be doing backups through zfs send/receive. NexentaStor was left out because zfs send/receive are not included in the free version.

    The question is now Solaris 11 Express versus OpenIndiana. To ease the management, I will be using http://www.napp-it.org Which one would you suggest and why? I found lots of information and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana maybe has more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else; I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking, no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be building a NAS for myself in the future (where high performance with Mac shares is also required) and Solaris has kernel support for AFP. I will use this server to gain experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post.

    UPDATES Given the first answers, I will strongly suggest that the person paying for the hardware add a second HD. Better 2 x 2 TB than 1 x 3 TB (3 TB is oversized anyway). I was trying to keep the initial costs down and spread them over a longer period, but it's better to have something good from the beginning.
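
    As an illustration of the send/receive backup route mentioned above (the pool, dataset and host names here are made up, not taken from the post), a remote ZFS backup can be as small as:

        # one-off full copy of a snapshot to the backup machine
        zfs snapshot tank/photos@2012-06-01
        zfs send tank/photos@2012-06-01 | ssh backuphost zfs receive -F backup/photos

        # later runs only ship the delta between two snapshots
        zfs snapshot tank/photos@2012-07-01
        zfs send -i tank/photos@2012-06-01 tank/photos@2012-07-01 | ssh backuphost zfs receive backup/photos

    Both OpenIndiana and Solaris 11 Express support this out of the box, so the choice between them does not change the backup mechanism.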

    Read the article

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB-class iSCSI SAN device (probably a Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSV), but it seems like replication support for these is either not available or brand new in both of these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because it makes the DataKeeper volume(s) available to the Failover Cluster Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover. Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in its examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) The company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives. 2) In the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication products all allow replication to heterogeneous storage devices. I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or overlooking/missing something.

    Read the article

  • How can I make subversion reset the stored passwords/users and remember my authentication credential

    - by NicDumZ
    Hello folks! Background: I used to have everything working just fine on my fresh install: $ svn co https://domain:443/ test1 Error validating server certificate for 'https://domain:443': - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually! Certificate information: - Hostname: **REMOVED** - Valid: **REMOVED** - Issuer: **REMOVED** - Fingerprint: **checked with issuer and REMOVED** (R)eject, accept (t)emporarily or accept (p)ermanently? p Authentication realm: <https://domain:443> Subversion repository Password for 'nicdumz-machine-hostname': Authentication realm: <https://domain:443> Subversion repository Username: nicdumz Password for 'nicdumz': # proceeds to checkout correctly $ svn co https://domain:443/ test2 # checks out nicely, without asking for my password.

    At some point I needed to commit stuff using a different account, so I did this: $ svn ci --username other.user Authentication realm: <https://domain:443> Subversion repository Password for 'other.user': # works fine But since then, every time I want to commit as 'nicdumz' (the default user; all repos have been checked out with that user), it prompts me for my password: $ svn ci Authentication realm: <https://domain:443> Subversion repository Password for 'nicdumz': Hey come on, why :) The same happens if I want a fresh checkout, since read access is also protected.

    So I tried fixing the issue by myself. I read that ~/.subversion/auth stores credentials, so I moved it out of the way: $ cd ~/.subversion $ mv auth oldauth $ mkdir auth It seemed to work at first, because svn had forgotten about certificate validation: $ svn co https://domain:443/ test3 Error validating server certificate for 'https://domain:443': - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually! Certificate information: - Hostname: **REMOVED** - Valid: **REMOVED** - Issuer: **REMOVED** - Fingerprint: **checked with issuer and REMOVED** (R)eject, accept (t)emporarily or accept (p)ermanently? p Authentication realm: <https://domain:443> Subversion repository Password for 'nicdumz-machine-hostname': Authentication realm: <https://domain:443> Subversion repository Username: nicdumz Password for 'nicdumz': # proceeds to checkout correctly $ svn up Authentication realm: <https://domain:443> Subversion repository Password for 'nicdumz': What? How is this happening? If you have suggestions for investigating this behaviour further, I am very interested. If I'm correct, there is no way to do a verbose svn up or anything of the like, so I'm not sure how I should go about investigating.

    Oh, and for what it's worth: $ svn --version svn, version 1.6.6 (r40053) compiled Oct 26 2009, 06:19:08 Copyright (C) 2000-2009 CollabNet. Subversion is open source software, see http://subversion.tigris.org/ This product includes software developed by CollabNet (http://www.Collab.Net/). The following repository access (RA) modules are available: * ra_neon : Module for accessing a repository via WebDAV protocol using Neon. - handles 'http' scheme - handles 'https' scheme * ra_svn : Module for accessing a repository using the svn network protocol. - with Cyrus SASL authentication - handles 'svn' scheme * ra_local : Module for accessing a repository on local disk. - handles 'file' scheme * ra_serf : Module for accessing a repository via WebDAV protocol using serf. - handles 'http' scheme - handles 'https' scheme
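
    One thing worth double-checking (an assumption about the setup, not something confirmed in the post): the [auth] section of ~/.subversion/config and the store-plaintext-passwords setting in ~/.subversion/servers control whether svn is allowed to cache the password at all. A sketch of the relevant entries:

        # ~/.subversion/config
        [auth]
        store-passwords = yes
        store-auth-creds = yes

        # ~/.subversion/servers, [global] section
        store-plaintext-passwords = yes

    If store-plaintext-passwords is "no" (or the interactive "ask" prompt was answered with no at some point), svn 1.6 on a machine without GNOME Keyring or KWallet will keep asking for the password even though the rest of the auth cache works.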

    Read the article

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators": Operating system administrators. These are your run-of-the-mill "server administrators." They are responsible for just the operating system. Application administrators. The people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on.

    I am one of the app admins. This is different from what I am used to. I used to just write code. The sysadmin took care not only of the OS but also of installing, setting up, and maintaining the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. It's no problem; I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly. All the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on.

    Right now they're using a mix of: (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins. They say it's ready. Then I try certain things, and half of them I still can't do. So they make some more changes. Then finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and number of changes that make me uncomfortable. Even though it finally works, more or less, it seems hacked together. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and would have a clean two- or three-step procedure to delegate responsibility to me. But it feels like we are really chewing up the filesystem and moving far away from the default setup. Any suggestions? Are we doing it the recommended way?

    P.S. For PostgreSQL it seems a little better. Its files are owned by a system user named postgres, so giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. But it seems a little weird doing all my work as another user.
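
    For illustration only (the group name and paths are assumptions, not from the post), the sudo half of such a delegation often ends up as a file under /etc/sudoers.d/, edited with visudo:

        # members of the appadmins group may control the web and database services
        %appadmins ALL = (root) NOPASSWD: /usr/sbin/apachectl configtest, \
                                          /usr/sbin/apachectl graceful, \
                                          /sbin/service httpd restart, \
                                          /sbin/service postgresql restart
        # and read the web server logs
        %appadmins ALL = (root) NOPASSWD: /usr/bin/less /var/log/httpd/*

    combined with a group-writable configuration directory (e.g. chgrp -R appadmins /etc/httpd/conf.d plus g+w on it), which is essentially the sudo + groups + permissions mix described above, just written down in one place.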

    Read the article

  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows: specific configuration for (currently two) domains: server { listen 443 ssl; server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { proxy_pass http://127.0.0.1:8180; proxy_set_header Host $http_host:8180; } } default configuration for unmatched ssl connections: server { listen 443 default ssl; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { return 403; } } http configuration: server { listen 80; rewrite ^ https://$host$request_uri? permanent; } The intention is clear: Redirect http traffic to https. Proxy each https:// call from phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan to apache2. This includes passing a host header, which is detected. Each unmatched ssl connection must return a 403 in nginx. Does not even reach apache2. In the apache2 side of the life, I have a default site, and a non-default site which will match studiotv.service.tebusco.lan: 000-default.conf file (available and enabled): <VirtualHost 127.0.0.1:8180> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. ServerName localhost ServerAdmin webmaster@localhost DocumentRoot /var/www/html <Directory /var/www/html> Order deny,allow Require all granted </Directory> </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet studiotv.conf file (available and enabled): <VirtualHost *:8180> ServerName studiotv.service.tebusco.lan ServerAdmin [email protected] DocumentRoot /var/www/studiotv <Directory /var/www/studiotv/> Options -Indexes +FollowSymLinks AllowOverride None Order deny,allow Allow from all Require all granted </Directory> # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn # No usamos ${APACHE_LOG_DIR} sino en su lugar /var/log/<host> ErrorLog /var/log/apache2/studiotv/error.log CustomLog /var/log/apache2/studiotv/access.log combined </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet However, when I hit the browser with http://studiotv.service.tebusco.lan, the default php page is shown instead. Question: What am I missing? (apache 2.4.7, nginx 1.6.0, ubuntu server 14.04).
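
    One detail that may be worth ruling out (an assumption, not something confirmed by the post): Apache only performs name-based matching among the virtual hosts whose address/port specification is the best match for the connection, so a default host bound to 127.0.0.1:8180 and a named host bound to *:8180 are never compared against each other; the more specific 127.0.0.1:8180 entry wins every time. A sketch with the address specifications aligned:

        # 000-default.conf
        <VirtualHost *:8180>
            ServerName localhost
            DocumentRoot /var/www/html
        </VirtualHost>

        # studiotv.conf
        <VirtualHost *:8180>
            ServerName studiotv.service.tebusco.lan
            DocumentRoot /var/www/studiotv
        </VirtualHost>

    With both vhosts on the same address specification, the Host header passed by proxy_set_header should then select the studiotv vhost as expected.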

    Read the article

  • mod_rewrite with Apache mod_jrun22.so & ColdFusion 9 on cPanel

    - by Eddie B
    How can I utilize mod_rewrite at either the httpd.conf level or per-directory level when mod_jrun22 seems to have short-stopped the rewrite process for ColdFusion pages? I have a ColdFusion 9 based site running on Centos 5.8 w cPanel. cPanel uses EasyApache 3 to manage virtual host containers and as such the conf for mod_jrun22.so, /usr/local/apache/conf/includes/pre_main_global.conf, is loaded prior to the main httpd.conf with the domain specific rules for the container. My assertion is that .cfm pages are failing to be rewritten due to the mod_jk22.so module having priority in the directive chain. To note, I also have a WordPress blog in the site where the rewrites appear to be working fine. For example the following code to remove the index file works fine for php and fails with cfm ... .htaccess under /blog/ : This works Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /blog/ RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /blog/index.php [L] </IfModule> .htaccess under / : This does not work as expected. Apache serves the page. ASSERT: This would redirect to domain.com/ without index.cfm Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.cfm$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.cfm [L] </IfModule> .htaccess under / : This works I'm presuming this is working because the redirect is to another .cfm page and a 404 handler in Application.cfc ... Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^.*\.cfm$ - [L] RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{ENV:REDIRECT_STATUS} =404 RewriteRule . /404.cfm$ [L] </IfModule> I've attempted numerous different methods to rewrite .cfm urls ... Adding [PT], [L], [R], [NS], Moving the script to Directory blocks under httpd.conf --- all with the same results ... either the rewrite doesn't work or Apache crashes in an endless loop ... Any help would be greatly appreciated. Below is a single-visit rewrite log snippet for a request to /index.cfm ... the pass-through is taking effect before the rewrite ... cat rewrite_dump_mod | grep index.cfm [perdir /home/foo/public_html/] strip per-dir prefix: /home/foo/public_html/index.cfm -> index.cfm [perdir /home/foo/public_html/] applying pattern '^.*\.cfm$' to uri 'index.cfm' [perdir /home/foo/public_html/] pass through /home/foo/public_html/index.cfm [perdir /home/foo/public_html/] strip per-dir prefix: /home/foo/public_html/index.cfm -> index.cfm [perdir /home/foo/public_html/] applying pattern '^.*\.cfm$' to uri 'index.cfm' [perdir /home/foo/public_html/] pass through /home/foo/public_html/index.cfm * UPDATE * I've managed to figure this out ... it took a while ... Options -Indexes -Multiviews +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] RewriteCond %{THE_REQUEST} ^.*/index\.cfm RewriteRule ^(.*)index.cfm http://%{HTTP_HOST}/$1 [R=301,L] </IfModule>

    Read the article

  • Optimise Apache for EC2 micro instance

    - by Shiyu Sekam
    I'm running apache2 on an EC2 micro instance with ~600 MB RAM. The instance ran for almost a year without problems, but in the last few weeks it keeps crashing because the server reaches MaxClients. The server basically runs a few websites: one WordPress blog (not often used), the company website (most used) and 2 small sites, which are just internal. The database for the blog runs on RDS, so there's no MySQL running on this web server. When I came to the company, the server was already set up and runs apache + mod_php + prefork. We want to migrate to nginx + php-fpm in the future, but that still needs further testing, so for now I have to stick with the old setup. I also use CloudFlare DDoS protection in front of the server, because it was attacked a couple of times in the last few weeks. My company doesn't want to pay for a better web server at this point, so I also have to stick with the micro instance. Additionally, the code for the website we run is really bad and slow, and sometimes a single page load can take up to 15 seconds. The whole website is dynamic and written in PHP, so caching isn't really an option here: it's a customized search for users. I've already turned off KeepAlive, which improved the performance a little bit. My prefork config looks like the following: StartServers 2 MinSpareServers 2 MaxSpareServers 5 ServerLimit 10 MaxClients 10 MaxRequestsPerChild 100

    The server just becomes unresponsive after running for a while, and I've run the following command to see how many connections there are: netstat | grep http | wc -l 75 Trying to restart apache helps for a short moment, but after a while the apache process(es) become unresponsive again. I have the following modules enabled (output of apache2ctl -M): Loaded Modules: core_module (static) log_config_module (static) logio_module (static) version_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) authz_host_module (shared) deflate_module (shared) dir_module (shared) expires_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) rewrite_module (shared) setenvif_module (shared) ssl_module (shared) status_module (shared) Syntax OK

    apache2.conf: # Security ServerTokens OS ServerSignature On TraceEnable On ServerName "web.example.com" ServerRoot "/etc/apache2" PidFile ${APACHE_PID_FILE} Timeout 30 KeepAlive off User www-data Group www-data AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> <Directory /> Options FollowSymLinks AllowOverride None </Directory> DefaultType none HostnameLookups Off ErrorLog /var/log/apache2/error.log LogLevel warn EnableSendfile On #Listen 80 Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf Include /etc/apache2/ports.conf LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include /etc/apache2/conf.d/*.conf Include /etc/apache2/sites-enabled/*.conf

    Vhost of the main site: <VirtualHost *:80> ServerName www.example.com ## Vhost docroot DocumentRoot /srv/www/jenkins/Web ## Directories, there should at least be a declaration for /srv/www/jenkins/Web <Directory /srv/www/jenkins/Web> AllowOverride All Order allow,deny Allow from all </Directory> ## Load additional static includes ## Logging ErrorLog /var/log/apache2/www.example.com.error.log LogLevel warn ServerSignature Off CustomLog /var/log/apache2/www.example.com.access.log combined ## Rewrite rules RewriteEngine On RewriteCond %{HTTP_HOST} !^www.example.com$ RewriteRule ^.*$ http://www.example.com%{REQUEST_URI} [R=301,L] ## Server aliases ServerAlias www.example.invalid ServerAlias example.com ## Custom fragment <Location /srv/www/jenkins/Web/library> Order Deny,Allow Deny from all </Location> <Files ~ "^\.(.+)"> Order deny,allow deny from all </Files> </VirtualHost>
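
    As a rough back-of-the-envelope check (the per-process figure is an assumption, not a measurement from this server): a prefork child with mod_php loaded commonly sits at around 30-60 MB resident, so on a ~600 MB instance with no swap, 600 MB minus roughly 100 MB for the OS leaves about 500 MB / 50 MB, i.e. around 10 children. In other words, MaxClients 10 is already close to the ceiling for this hardware, and if a page takes 15 seconds to render, ten slots can only serve about 10/15 (roughly 0.7) requests per second before connections start queueing up, which matches the "75 connections, server unresponsive" symptom.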

    Read the article

  • How does the Cloud compare to Colocation? And development too

    - by David
    Currently I/we run a SaaS web application where each subscriber has their own physical instance of the application in addition to their own database. The setup has each web application instance deployed on two different IIS boxes, both for load balancing and redundancy (the machines have their Windows Update install times 12 hours apart, for example). Databases are mirrored on two different SQL Server 2012 machines with AlwaysOn for uptime. I don't make use of SQL Server clustering (as it doesn't provide storage-level failover: we don't have a shared storage box). Because it's a Windows setup, there are also two Domain Controllers (we cheat: they're both Mac Minis, 17 W each, which keeps our colo power costs low). Finally there's also an Exchange server (Mailbox, Hub Transport and Client Access). One of the SQL Servers also doubles up as an Exchange Hub Transport. Running costs are about $700 a month for our quarter-rack colocation (which includes power and peering/transfer), then there's about $150 a month for SPLA licensing, so $850 a month in total. Then there's the hard-to-quantify cost of administration, but I reckon I spend a couple of hours a week checking in on the servers: reviewing event logs, etc.

    I keep getting bombarded by ads and manufactured news stories about how great "the cloud" is. Back in 2008, when the cloud was taking off, I was reading up on the proper "cloud" services like Google App Engine, where you write in Python against Google's API; that's how they scale your application across servers, and you also use their database provider for scaling storage. Simple enough to understand. Then along came Amazon, and I understand how Amazon Storage works, but I'm not sure how Amazon Compute works: web application pages don't take much CPU time to compute, and how do you even quantify usage anyway? Finally, Rackspace gets in on the act and now I'm really confused. Rackspace advertises "Cloud" SQL Server 2012 at about "$0.70 per hour"; going by how they advertise it, I thought the "hour" meant the sum of CPU time, IO blocking time, and maybe time spent transferring data, so for a low-intensity application that works out pretty cheap, then? Nope. I went to a sales chat window and spoke to one of their advisors. They told me the $0.70/hour was actually for every hour the SQL Server is running... but who wants a SQL Server for only a few hours? You're going to need it available 24 hours a day for months on end. $0.70 * 24 * 31 works out at $520 a month, which is ridiculously expensive for SQL Server; an SPLA license for SQL Server is only $50 a month or so. That $520 a month does not include "fanatical support", and you also need to stack the cost of the host Windows server instance on top. From what I can tell, Rackspace's "Cloud" products seem like a cynical rebranding of an overpriced VPS service, priced by the hour. I have the same confusion about Windows Azure, which uses similar terms to describe the products available, but I think that's because Azure offers both traditional shared web hosting in addition to their own APIs you can target for scalable applications.

    Read the article

  • Getting ConnectTimeoutException: "The host did not accept the connection within timeout"

    - by Sanjana
    Can Some one Help me, how we can solve the following problem. nested exception is org.apache.commons.httpclient.ConnectTimeoutException: The host did not accept the connection within timeout of 10000 ms at org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.convertHttpInvokerAccessException(HttpInvokerClientInterceptor.java:211) at org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.invoke(HttpInvokerClientInterceptor.java:144) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy19.isEmployeeToken(Unknown Source) at com.clickandbuy.webapps.surfer.commons.ContextUtils.isEmployeeToken(ContextUtils.java:375) at com.clickandbuy.webapps.surfer.commons.ContextUtils.validateLogin(ContextUtils.java:248) at sun.reflect.GeneratedMethodAccessor1364.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.el.util.ReflectionUtil.invokeMethod(ReflectionUtil.java:329) at org.jboss.el.util.ReflectionUtil.invokeMethod(ReflectionUtil.java:274) at org.jboss.el.parser.AstMethodSuffix.getValue(AstMethodSuffix.java:59) at org.jboss.el.parser.AstValue.getValue(AstValue.java:67) at org.jboss.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:186) at org.springframework.binding.expression.el.BindingValueExpression.getValue(BindingValueExpression.java:54) at org.springframework.binding.expression.el.ELExpression.getValue(ELExpression.java:54) at org.springframework.webflow.action.EvaluateAction.doExecute(EvaluateAction.java:77) at org.springframework.webflow.action.AbstractAction.execute(AbstractAction.java:188) at org.springframework.webflow.execution.AnnotatedAction.execute(AnnotatedAction.java:145) at org.springframework.webflow.execution.ActionExecutor.execute(ActionExecutor.java:51) at org.springframework.webflow.engine.ActionState.doEnter(ActionState.java:101) at org.springframework.webflow.engine.State.enter(State.java:194) at org.springframework.webflow.engine.Flow.start(Flow.java:535) at org.springframework.webflow.engine.impl.FlowExecutionImpl.start(FlowExecutionImpl.java:364) at org.springframework.webflow.engine.impl.FlowExecutionImpl.start(FlowExecutionImpl.java:222) at org.springframework.webflow.executor.FlowExecutorImpl.launchExecution(FlowExecutorImpl.java:140) at org.springframework.webflow.mvc.servlet.FlowHandlerAdapter.handle(FlowHandlerAdapter.java:193) at org.springframework.webflow.mvc.servlet.FlowController.handleRequest(FlowController.java:174) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:501) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at 
com.clickandbuy.webapps.surfer.commons.filter.LogUserIPFilter.doFilter(LogUserIPFilter.java:61) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.clickandbuy.webapps.surfer.commons.filter.AddHeaderFilter.doFilter(AddHeaderFilter.java:54) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at com.clickandbuy.webapps.commons.filter.SessionSizeFilter.doFilter(SessionSizeFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:619) Caused by: org.apache.commons.httpclient.ConnectTimeoutException: The host did not accept the connection within timeout of 10000 ms at org.apache.commons.httpclient.protocol.ReflectionSocketFactory.createSocket(ReflectionSocketFactory.java:155) at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:125) at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707) at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1361) at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387) at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171) at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397) at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323) at org.springframework.remoting.httpinvoker.CommonsHttpInvokerRequestExecutor.executePostMethod(CommonsHttpInvokerRequestExecutor.java:195) at org.springframework.remoting.httpinvoker.CommonsHttpInvokerRequestExecutor.doExecuteRequest(CommonsHttpInvokerRequestExecutor.java:129) at org.springframework.remoting.httpinvoker.AbstractHttpInvokerRequestExecutor.executeRequest(AbstractHttpInvokerRequestExecutor.java:136) at org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.executeRequest(HttpInvokerClientInterceptor.java:191) at org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.executeRequest(HttpInvokerClientInterceptor.java:173) at 
org.springframework.remoting.httpinvoker.HttpInvokerClientInterceptor.invoke(HttpInvokerClientInterceptor.java:141) ... 57 more Caused by: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:519) at sun.reflect.GeneratedMethodAccessor284.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.commons.httpclient.protocol.ReflectionSocketFactory.createSocket(ReflectionSocketFactory.java:140) ... 70 more Thanks In Advance Sanjana
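
    Purely as an illustration of where the 10000 ms figure can be changed on the client side (the 30-second value, the bean wiring and the idea that a longer timeout is the right fix are all assumptions; the root cause may just as well be a firewall or an unreachable backend), the Spring HttpInvoker executor seen in the trace exposes the underlying Commons HttpClient:

        import org.springframework.remoting.httpinvoker.CommonsHttpInvokerRequestExecutor;

        public class InvokerTimeoutConfig {
            public static CommonsHttpInvokerRequestExecutor relaxedExecutor() {
                CommonsHttpInvokerRequestExecutor executor = new CommonsHttpInvokerRequestExecutor();
                // commons-httpclient 3.x keeps the connect timeout on the connection manager params
                executor.getHttpClient()
                        .getHttpConnectionManager()
                        .getParams()
                        .setConnectionTimeout(30000); // 30 s instead of the 10 s seen in the stack trace
                return executor;
            }
        }

    The executor would then be injected into the HttpInvokerProxyFactoryBean through its httpInvokerRequestExecutor property, but none of this helps if the remote host is genuinely not accepting connections.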

    Read the article

  • NHibernate mapping with optimistic-lock="version" and dynamic-update="true" is generating invalid up

    - by SteveBering
    I have an entity "Group" with an assigned ID which is added to an aggregate in order to persist it. This causes an issue because NHibernate can't tell if it is new or existing. To remedy this issue, I changed the mapping to make the Group entity use optimistic locking on a sql timestamp version column. This caused a new issue. Group has a bag of sub objects. So when NHibernate flushes a new group to the database, it first creates the Group record in the Groups table, then inserts each of the sub objects, then does an update of the Group records to update the timestamp value. However, the sql that is generated to complete the update is invalid when the mapping is both dynamic-update="true" and optimistic-lock="version". Here is the mapping: <class xmlns="urn:nhibernate-mapping-2.2" dynamic-update="true" mutable="true" optimistic-lock="version" name="Group" table="Groups"> <id name="GroupNumber" type="System.String, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="GroupNumber" length="5" /> <generator class="assigned" /> </id> <version generated="always" name="Timestamp" type="BinaryBlob" unsaved-value="null"> <column name="TS" not-null="false" sql-type="timestamp" /> </version> <property name="UID" update="false" type="System.Guid, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="GroupUID" unique="true" /> </property> <property name="Description" type="AnsiString"> <column name="GroupDescription" length="25" not-null="true" /> </property> <bag access="field.camelcase-underscore" cascade="all" inverse="true" lazy="true" name="Assignments" mutable="true" order-by="GroupAssignAssignment"> <key foreign-key="fk_Group_Assignments"> <column name="GroupNumber" /> </key> <one-to-many class="Assignment" /> </bag> <many-to-one class="Aggregate" name="Aggregate"> <column name="GroupParentID" not-null="true" /> </many-to-one> </class> </hibernate-mapping> When the mapping includes both the dynamic update and the optimistic lock, the sql generated is: UPDATE groups SET WHERE GroupNumber = 11111 AND TS=0x00000007877 This is obviously invalid as there are no SET statements. If I remove the dynamic update part, everything gets updated during this update statement instead. This makes the statement valid, but rather unnecessary. Has anyone seen this issue before? Am I missing something? Thanks, Steve

    Read the article

  • Why isn't the Spring AOP XML schema properly loaded when Tomcat loads & reads beans.xml

    - by chrisbunney
    I'm trying to use Spring's Schema Based AOP Support in Eclipse and am getting errors when trying to load the configuration in Tomcat. There are no errors in Eclipse and auto-complete works correctly for the aop namespace, however when I try to load the project into eclipse I get this error: 09:17:59,515 WARN XmlBeanDefinitionReader:47 - Ignored XML validation warning org.xml.sax.SAXParseException: schema_reference.4: Failed to read schema document 'http://www.springframework.org/schema/aop/spring-aop-2.5.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not . Followed by: SEVERE: StandardWrapper.Throwable org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 39 in XML document from /WEB-INF/beans.xml is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'aop:config'. Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'aop:config'. Based on this, it seems the schema is not being read when Tomcat parses the beans.xml file, leading to the <aop:config> element not being recognised. My beans.xml file is as follows: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jaxws="http://cxf.apache.org/jaxws" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd"> <!--import resource="classpath:META-INF/cxf/cxf.xml" /--> <!--import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" /--> <!--import resource="classpath:META-INF/cxf/cxf-servlet.xml" /--> <!-- NOTE: endpointName attribute maps to wsdl:port@name & should be the same as the portName attribute in the @WebService annotation on the IWebServiceImpl class --> <!-- NOTE: serviceName attribute maps to wsdl:service@name & should be the same as the serviceName attribute in the @WebService annotation on the ASDIWebServiceImpl class --> <!-- NOTE: address attribute is the actual URL of the web service (relative to web app location) --> <jaxws:endpoint xmlns:tns="http://iwebservices.ourdomain/" id="iwebservices" implementor="ourdomain.iwebservices.IWebServiceImpl" endpointName="tns:IWebServiceImplPort" serviceName="tns:IWebService" address="/I" wsdlLocation="wsdl/I.wsdl"> <!-- To have CXF auto-generate WSDL on the fly, comment out the above wsdl attribute --> <jaxws:features> <bean class="org.apache.cxf.feature.LoggingFeature" /> </jaxws:features> </jaxws:endpoint> <aop:config> <aop:aspect id="myAspect" ref="aBean"> </aop:aspect> </aop:config> </beans> The <aop:config> element in my beans.xml file is copy-pasted from the Spring website to try and remove any possible source of error Can anyone shed any light on why this error is occurring and what I can do to fix it?
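
    One frequently cited cause (an assumption here, since the deployment details aren't shown): the versioned XSD URL can only be resolved offline if the spring-aop JAR, whose META-INF/spring.schemas maps that URL to a classpath resource, is actually in WEB-INF/lib at runtime; otherwise Xerces tries to download it, as the warning suggests. A version-less schemaLocation, which Spring maps to the newest bundled schema, is a common way to write it:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                   http://www.springframework.org/schema/aop   http://www.springframework.org/schema/aop/spring-aop.xsd">
            <aop:config/>
        </beans>

    Either way, the aop:config element can only be validated once the aop namespace handler and its schema are on the web application's classpath.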

    Read the article

  • Spring security ldap: no declaration can be found for element 'ldap-authentication-provider'

    - by wuntee
    Following the spring-security documentation: http://static.springsource.org/spring-security/site/docs/3.0.x/reference/ldap.html I am trying to set up ldap authentication (very simple - just need to know if a user is authenticated or not, no authorities mapping needed) and have put this in my applicationContext-security.xml file <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> ... <ldap-server url="ldap://adapps.company.com:389/dc=company,dc=com" /> <ldap-authentication-provider user-search-filter="(samaccountname={0})" user-search-base="dc=company,dc=com"/> The problem I run into is that it doesnt seem like ldap-authentication-provider; I fell like i may be missing some configuration isn the beans definition. The error I get when trying to run the application is: SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 27 in XML document from ServletContext resource [/WEB-INF/rvaContext-security.xml] is invalid; nested exception is org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'ldap-authentication-provider'. at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:130) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:465) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:395) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3972) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4467) at 
org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardHost.start(StandardHost.java:722) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:593) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:592) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'ldap-authentication-provider'. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:236) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:172) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:382) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:316) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:429) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3185) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1955) at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.emptyElement(XMLSchemaValidator.java:725) at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:322) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1693) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:368) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:834) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:764) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:148) at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:250) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:75) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:388) ... 28 more Can anyone see what im missing? Also, is that all I need to add to the security bean in order to authenticate against ldap?
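
    For what it's worth, in the Spring Security 3.0 schema the provider elements are declared as children of <authentication-manager> rather than as top-level elements, so a configuration along these lines (a sketch; the http/filter-chain parts of the file are omitted) is what the namespace parser expects:

        <ldap-server url="ldap://adapps.company.com:389/dc=company,dc=com" />

        <authentication-manager>
            <ldap-authentication-provider
                user-search-filter="(samaccountname={0})"
                user-search-base="dc=company,dc=com" />
        </authentication-manager>

    If the element is nested correctly but still unresolved, the usual suspect is a missing spring-security-ldap (and spring-ldap) JAR on the classpath.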

    Read the article

  • Loop through values and display in a pdf file

    - by chupinette
    Hello all! i have written the following code: As you can see there is a for loop to go through some values and display them in the generated pdf. The problem is that all the values are being written at the same place. I have tried to insert a new line but it does not seem to work. Can anyone suggest me how i can do it? Do i need to write a nested for loop so that it the values at different y positions? $pdf = pdf_new(); // open a file pdf_open_file($pdf, "C:/xampp/htdocs/final/6.pdf"); pdf_set_info($pdf, "Author", ""); pdf_set_info($pdf, "Title", ""); pdf_set_info($pdf, "Creator", ""); pdf_set_info($pdf, "Subject", ""); // start a new page (A4) $x = 595; $y = 842; pdf_begin_page($pdf, $x, $y); pdf_set_parameter($pdf, 'FontOutline', 'Arial=c:\windows\fonts\arial.ttf'); pdf_setcolor($pdf, "stroke", "rgb", 0, 0, 0, 1.0); // get and use a font object $font = pdf_findfont($pdf, "Arial", "host", 1); pdf_setfont($pdf, $font, 10); // print text pdf_show_xy($pdf, "QUOTATION" , 250, $y - 60); pdf_show_xy($pdf, "Customer Name: " . $this->customer_details['first_name'] . " " . $this->customer_details['last_name'], 50, 770); pdf_show_xy($pdf, "Date: " . date("F j, Y, g:i a"), 50, 750); pdf_show_xy($pdf, "Number of items requested: " . $count_items_req, 50, 730); pdf_show_xy($pdf, "Number of items found: " . $count_items_found, 50, 710); // add an image under the text $image = $image = PDF_load_image($pdf, "png", "C:/xampp/htdocs/final/images/footer_logo.png", ""); PDF_fit_image($pdf, $image, 50, 785, ""); pdf_moveto($pdf, 20, 780); pdf_lineto($pdf, 575, 780); pdf_stroke($pdf); // draw another line near the bottom of the page pdf_moveto($pdf, 20, 50); pdf_lineto($pdf, 575, 50); pdf_stroke($pdf); //Draw the lines $offset = 184; $i = 0; pdf_moveto($pdf, 20, $y - 160); pdf_lineto($pdf, $x - 20, $y - 160); pdf_stroke($pdf); pdf_moveto($pdf, $x - 400, $y - 160); pdf_lineto($pdf, $x - 400, 80); pdf_stroke($pdf); pdf_moveto($pdf, $x - 200, $y - 160); pdf_lineto($pdf, $x - 200, 80); pdf_stroke($pdf); pdf_moveto($pdf, $x - 100, $y - 160); pdf_lineto($pdf, $x - 100, 80); pdf_stroke($pdf); pdf_continue_text($pdf, ''); pdf_continue_text($pdf, ''); pdf_show_xy($pdf, "Searched Item", 70, $y - 150); pdf_show_xy($pdf, "Searched Item", 70, $y - 150); pdf_show_xy($pdf, "Item name", 240, $y - 150); pdf_show_xy($pdf, "Item name", 240, $y - 150); pdf_show_xy($pdf, "Price", $x - 180, $y - 150); pdf_show_xy($pdf, "Price", $x - 180, $y - 150); pdf_show_xy($pdf, "Discounted Price", $x - 100, $y - 150); pdf_show_xy($pdf, "Discounted Price", $x - 100, $y - 150); for ($i = 0; $i < count($this->quotation_details); $i++) { pdf_show_xy($pdf, $this->quotation_details[$i]['name_searched'] , 70, $y - 500); } // and write some text under it pdf_show_xy($pdf, "", 250, 35); // end page pdf_end_page($pdf); // close and save file pdf_close($pdf);
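
    On the loop question: no nested loop is needed; one approach (a sketch reusing the variables from the code above, with an assumed 20-point row height) is to derive each row's y coordinate from the loop index so that every record is drawn one line lower than the previous one:

        $rowHeight = 20;                         // assumed line spacing in points
        $startY    = $y - 180;                   // first row just below the header row at $y - 150
        for ($i = 0; $i < count($this->quotation_details); $i++) {
            $rowY = $startY - ($i * $rowHeight); // each record moves down one row
            pdf_show_xy($pdf, $this->quotation_details[$i]['name_searched'], 70, $rowY);
            // further columns (item name, price, discounted price) would reuse the same $rowY
            // with the x offsets already used for the column headings (240, $x - 180, $x - 100)
        }

    The same index-based offset also tells you when the page is full ($rowY < 80 in this layout), at which point a new page would have to be started with pdf_begin_page.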

    Read the article

  • GLFW 3 initialized, yet not?

    - by mSkull
    I'm struggling to create a window with the GLFW 3 function glfwCreateWindow. I have set an error callback function that pretty much just prints out the error number and description, and according to that the GLFW library has not been initialized, even though the glfwInit function just returned success. Here's an excerpt from my code: // Error callback function prints out any errors from GLFW to the console static void error_callback( int error, const char *description ) { cout << error << '\t' << description << endl; } bool Base::Init() { // Set error callback /*! * According to the documentation this can be used before glfwInit, * and removing it won't change anything anyway */ glfwSetErrorCallback( error_callback ); // Initialize GLFW /*! * This returns successfully, but... */ if( !glfwInit() ) { cout << "INITIALIZER: Failed to initialize GLFW!" << endl; return false; } else { cout << "INITIALIZER: GLFW Initialized successfully!" << endl; } // Create window /*! * When this is called, or any other glfw function, I get a * "65537 The GLFW library is not initialized" in the console, through * the error_callback function */ window = glfwCreateWindow( 800, 600, "GLFW Window", NULL, NULL ); if( !window ) { cout << "INITIALIZER: Failed to create window!" << endl; glfwTerminate(); return false; } // Set window to current context glfwMakeContextCurrent( window ); ... return true; }

    And here's what's printed out in the console: INITIALIZER: GLFW Initialized successfully! 65537 The GLFW library is not initialized INITIALIZER: Failed to create window!

    I think I'm getting the error because the setup isn't entirely correct, but I've done the best I can with what I could find. I downloaded the Windows 32-bit package from glfw.org, stuck the 2 include files from it into minGW/include/GLFW, the 2 .a files (from the lib-mingw folder) into minGW/lib, and the dll, also from the lib-mingw folder, into Windows/System32. In Code::Blocks I have, from build options -> linker settings, linked the 2 .a files from the download. I believe I need to link more things, but I can't figure out what, or where I should get those things from.
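
    A cheap diagnostic (the mismatch theory is only an assumption): a "65537 The GLFW library is not initialized" right after a successful glfwInit often comes from mixing libraries, for example linking both the static glfw3 archive and the glfw3dll import library so that two copies of the library's state end up in the program. Printing the compile-time and run-time versions side by side makes such a mismatch visible:

        #include <GLFW/glfw3.h>
        #include <iostream>

        int main() {
            int major, minor, rev;
            glfwGetVersion( &major, &minor, &rev );   // version of the library actually loaded
            std::cout << "Compiled against GLFW " << GLFW_VERSION_MAJOR << "."
                      << GLFW_VERSION_MINOR << "." << GLFW_VERSION_REVISION
                      << ", running against " << major << "." << minor << "." << rev << std::endl;
            return 0;
        }

    If the two versions disagree, the linker settings (static libglfw3.a versus the libglfw3dll.a import library plus glfw3.dll) are the first thing to revisit.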

    Read the article

  • Concatenate & Minify JS on the fly OR at build time - ASP.NET MVC

    - by Charlino
    As an extension to this question here Linking JavaScript Libraries in User Controls I was after some examples of how people are concatinating & minifying javascript on the fly OR at build time. I would also like to see how it then works into your master pages. I don't mind page specific files being minified and linked inidividually as they currently are (see below) but all the js files on the main master page (I have about 5 or 6) I would like concatenated and minified. Bonus points for anyone who also incorporates CSS concatenation & minification! :-) Current master page with the common js files that I would like concatenated & minified: <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %> <head runat="server"> ... BLAH ... <asp:ContentPlaceHolder ID="AdditionalHead" runat="server" /> ... BLAH ... <%= Html.CSSBlock("/styles/site.css") %> <%= Html.CSSBlock("/styles/jquery-ui-1.7.1.css") %> <%= Html.CSSBlock("/styles/jquery.lightbox-0.5.css") %> <%= Html.CSSBlock("/styles/ie6.css", 6) %> <%= Html.CSSBlock("/styles/ie7.css", 7) %> <asp:ContentPlaceHolder ID="AdditionalCSS" runat="server" /> </head> <body> ... BLAH ... <%= Html.JSBlock("/scripts/jquery-1.3.2.js", "/scripts/jquery-1.3.2.min.js") %> <%= Html.JSBlock("/scripts/jquery-ui-1.7.1.js", "/scripts/jquery-ui-1.7.1.min.js") %> <%= Html.JSBlock("/scripts/jquery.validate.js", "/scripts/jquery.validate.min.js") %> <%= Html.JSBlock("/scripts/jquery.lightbox-0.5.js", "/scripts/jquery.lightbox-0.5.min.js") %> <%= Html.JSBlock("/scripts/global.js", "/scripts/global.min.js") %> <asp:ContentPlaceHolder ID="AdditionalJS" runat="server" /> </body> Used in a page like this (which I'm happy with): <asp:Content ID="signUpContent" ContentPlaceHolderID="AdditionalJS" runat="server"> <%= Html.JSBlock("/scripts/pages/account.signup.js", "/scripts/pages/account.signup.min.js") %> </asp:Content> EDIT: What I'm using now Since asking this question, Microsoft have released their own JS & CSS compression library called Microsoft AJAX Minifier, I'd definitely recommend checking it out. It includes MSBuild tasks which are the duck's nuts.
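
    For the build-time route mentioned in the edit, the Ajax Minifier ships an MSBuild task that can be wired into the project file; a sketch follows (the import path, item patterns and AfterBuild hook are assumptions about a typical setup, so check them against the version you install):

        <Import Project="$(MSBuildExtensionsPath)\Microsoft\MicrosoftAjax\AjaxMin.tasks" />
        <Target Name="AfterBuild">
          <ItemGroup>
            <JS Include="Scripts\**\*.js" Exclude="Scripts\**\*.min.js" />
            <CSS Include="Styles\**\*.css" Exclude="Styles\**\*.min.css" />
          </ItemGroup>
          <AjaxMin JsSourceFiles="@(JS)" JsSourceExtensionPattern="\.js$" JsTargetExtension=".min.js"
                   CssSourceFiles="@(CSS)" CssSourceExtensionPattern="\.css$" CssTargetExtension=".min.css" />
        </Target>

    Concatenation isn't handled by the task itself, so the usual pattern is to emit one combined file first (with a small helper target or script) and then let the minifier produce the .min version that Html.JSBlock points at in release builds.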

    Read the article

  • Fixing predicated NSFetchedResultsController/NSFetchRequest performance with SQLite backend?

    - by Jaanus
    I have a series of NSFetchedResultsControllers powering some table views, and their performance on the device was abysmal, on the order of seconds. Since it all runs on the main thread, it's blocking my app at startup, which is not great. I investigated, and it turns out the predicate is the problem: NSPredicate *somePredicate = [NSPredicate predicateWithFormat:@"ANY somethings == %@", something]; [fetchRequest setPredicate:somePredicate]; I.e. the fetch entity, call it "things", has a many-to-many relation with the entity "something". This predicate is a filter that limits the results to only things that have a relation with a particular "something". When I removed the predicate for testing, fetch time (the initial performFetch: call) dropped (for some extreme cases) from 4 seconds to around 100ms or less, which is acceptable. I am troubled by this, though, as it negates a lot of the benefit I was hoping to gain with Core Data and NSFRC, which otherwise seems like a powerful tool. So, my question is, how can I optimize this performance? Am I using the predicate wrong? Should I modify the model/schema somehow? And what other ways are there to fix this? Is this kind of degraded performance to be expected? (There are on the order of hundreds of <1KB objects.) EDIT WITH DETAILS: Here's the code: [fetchRequest setFetchLimit:200]; NSLog(@"before fetch"); BOOL success = [frc performFetch:&error]; if (!success) { NSLog(@"Fetch request error: %@", error); } NSLog(@"after fetch"); Updated logs (previously, I had some application inefficiencies degrading the performance here. These are the updated logs that should be as close to optimal as you can get in my current environment): 2010-02-05 12:45:22.138 Special Ppl[429:207] before fetch 2010-02-05 12:45:22.144 Special Ppl[429:207] CoreData: sql: SELECT DISTINCT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 LEFT OUTER JOIN Z_1THINGS t1 ON t0.Z_PK = t1.Z_2THINGS WHERE t1.Z_1SOMETHINGS = ? ORDER BY t0.ZID DESC LIMIT 200 2010-02-05 12:45:22.663 Special Ppl[429:207] CoreData: annotation: sql connection fetch time: 0.5094s 2010-02-05 12:45:22.668 Special Ppl[429:207] CoreData: annotation: total fetch execution time: 0.5240s for 198 rows. 2010-02-05 12:45:22.706 Special Ppl[429:207] after fetch If I do the same fetch without the predicate (by commenting out the two lines at the beginning of the question): 2010-02-05 12:44:10.398 Special Ppl[414:207] before fetch 2010-02-05 12:44:10.405 Special Ppl[414:207] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, <model fields> FROM ZTHING t0 ORDER BY t0.ZID DESC LIMIT 200 2010-02-05 12:44:10.426 Special Ppl[414:207] CoreData: annotation: sql connection fetch time: 0.0125s 2010-02-05 12:44:10.431 Special Ppl[414:207] CoreData: annotation: total fetch execution time: 0.0262s for 200 rows. 2010-02-05 12:44:10.457 Special Ppl[414:207] after fetch That is a 20-fold difference in times. 500ms is not that great, and there does not seem to be a way to do it on a background thread or otherwise optimize it that I can think of. (Apart from going to a binary store where this becomes a non-issue, so I might do that. Binary store performance is consistently ~100ms for the above 200-object predicated query.) (I nested another question here previously, which I have now moved away.)

    Read the article

  • boost fusion: strange problem depending on number of elements on a vector

    - by ChAoS
    I am trying to use Boost::Fusion (Boost v1.42.0) in a personal project. I get an interesting error with this code: #include "boost/fusion/include/sequence.hpp" #include "boost/fusion/include/make_vector.hpp" #include "boost/fusion/include/insert.hpp" #include "boost/fusion/include/invoke_procedure.hpp" #include "boost/fusion/include/make_vector.hpp" #include <iostream> class Class1 { public: typedef boost::fusion::vector<int,float,float,char,int,int> SequenceType; SequenceType s; Class1(SequenceType v):s(v){} }; class Class2 { public: Class2(){} void met(int a,float b ,float c ,char d ,int e,int f) { std::cout << a << " " << b << " " << c << " " << d << " " << e << std::endl; } }; int main(int argn, char**) { Class2 p; Class1 t(boost::fusion::make_vector(9,7.66f,8.99f,'s',7,6)); boost::fusion::begin(t.s); //OK boost::fusion::insert(t.s, boost::fusion::begin(t.s), &p); //OK boost::fusion::invoke_procedure(&Class2::met,boost::fusion::insert(t.s, boost::fusion::begin(t.s), &p)); //FAILS } It fails to compile (gcc 4.4.1): In file included from /home/thechaos/Escriptori/of_preRelease_v0061_linux_FAT/addons/ofxTableGestures/ext/boost/fusion/include/invoke_procedur e.hpp:10, from problema concepte.cpp:11: /home/thechaos/Escriptori/of_preRelease_v0061_linux_FAT/addons/ofxTableGestures/ext/boost/fusion/functional/invocation/invoke_procedure.hpp: I n function ‘void boost::fusion::invoke_procedure(Function, const Sequence&) [with Function = void (Class2::*)(int, float, float, char, int, in t), Sequence = boost::fusion::joint_view<boost::fusion::joint_view<boost::fusion::iterator_range<boost::fusion::vector_iterator<const boost::f usion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0>, boost::fusion::vector_iterator<boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fus ion::void_, boost::fusion::void_>, 0> >, const boost::fusion::single_view<Class2*> >, boost::fusion::iterator_range<boost::fusion::vector_iter ator<boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion: :void_>, 0>, boost::fusion::vector_iterator<const boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion ::void_, boost::fusion::void_, boost::fusion::void_>, 6> > >]’: problema concepte.cpp:39: instantiated from here /home/thechaos/Escriptori/of_preRelease_v0061_linux_FAT/addons/ofxTableGestures/ext/boost/fusion/functional/invocation/invoke_procedure.hpp:88 : error: incomplete type ‘boost::fusion::detail::invoke_procedure_impl<void (Class2::*)(int, float, float, char, int, int), const boost::fusio n::joint_view<boost::fusion::joint_view<boost::fusion::iterator_range<boost::fusion::vector_iterator<const boost::fusion::vector<int, float, f loat, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0>, boost::fusion::vector_itera tor<boost::fusion::vector<int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion:: void_>, 0> >, const boost::fusion::single_view<Class2*> >, boost::fusion::iterator_range<boost::fusion::vector_iterator<boost::fusion::vector< int, float, float, char, int, int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 0>, boost::fusion: :vector_iterator<const boost::fusion::vector<int, float, float, char, int, 
    int, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_, boost::fusion::void_>, 6> > >, 7, true, false>’ used in nested name specifier However, if I change the number of arguments in the vectors and the method from 6 to 5, i.e. from int,float,float,char,int,int to int,float,float,char,int, I can compile it without problems. I suspected the maximum number of arguments might be the limitation, but I tried changing it by defining FUSION_MAX_VECTOR_SIZE, without success. I am unable to see what I am doing wrong. Can you reproduce this? Could it be a Boost bug (I doubt it, but it is not impossible)?

    Read the article

  • Why do many software projects fail today?

    - by TomTom
    For as long as there have been software projects, the world has wondered why they fail so often. I would like to know if there is a list or something equivalent which shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existing", which also includes "No (real) customer / user involved". EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: the project was more than 10% over budget and time. In my opinion the 10% +/- is a good range for an offer / tender. EDIT: Until now (Feb 11) it seems that most posters agree that a failed project is basically a failure of project management (whatever "fail" means). But IMHO it also comes out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is unsuccessful, but the supposedly lazy, incompetent developer teams? When I read the posts I can also sense that there is a big "gap" between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too short timeframes ...). I'm asking myself: How can we improve it? Or do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high-quality requirements" and "realistic / iteration-based time schedules"? EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called Clean-Code-Developer. The aim of the group is to bring more professionalism into software engineering. Regardless of whether a developer has a degree or many years of experience, anyone can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work. And he has an internal value system which helps him to improve and become better. Check it out on: Clean Code Developer EDIT: Our company is at the moment doing a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.

    Read the article

  • InteropServices COMException when executing a .net app from a web CGI script on Windows Server 2003

    - by Kurt W. Leucht
    Disclaimer: I'm completely clueless about .net and COM. I have a vendor's application that appears to be written in .net and I'm trying to wrap it with a web form (a cgi-bin Perl script) so I can eventually launch this vendor's app from a separate computer. I'm on a Windows Server 2003 R2 SE SP1 system and I'm using Apache 2.2 for the web server and ActivePerl 5.10.0.1004 for the cgi script. My cgi script calls the vendor's app that resides on the same machine using the Perl backtick operator. ... $result = "Result: " . `$vendorsPath/$vendorsExecutable $arg1 $arg2`; ... Right now I'm just running the IE web browser locally on the server machine and accessing "http://localhost/cgi-bin/myPerlScript.pl". The vendor's app fails and logs a debug message that includes the following stack trace (I changed a couple of names so as to not give away the vendor's identity): ... System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Runtime.InteropServices.COMException (0x80043A1D): 0x80040154 - Class not registered --- End of inner exception stack trace --- at System.RuntimeType.InvokeDispMethod(String name, BindingFlags invokeAttr, Object target, Object[] args, Boolean[] byrefModifiers, Int32 culture, String[] namedParameters) at System.RuntimeType.InvokeMember(String name, BindingFlags invokeAttr, Binder binder, Object target, Object[] args, ParameterModifier[] modifiers, CultureInfo culture, String[] namedParameters) at VendorsTool.Engine.Core.VendorsEngine.LoadVendorsServices(String fileName, String& projectCommPath) ... When I run the vendor's app from the Windows command line on the server machine with the exact same arguments that the cgi script is passing, it runs just fine, so there's something about invoking their app via the web script that is causing a problem. This problem is likely security related because the whole thing runs just fine on a Windows XP Pro machine (both command line and web invocation). I actually developed my web script there and got it completely working there before I tried moving it to the Windows Server 2003 machine. So what's different about the Windows Server 2003 machine that would keep the vendor's .net app from being executed successfully by a web cgi script? Can I fix this problem somehow to make it work on my server or will the vendor have to make a change to their .net app and ship out a new version? I'm probably the only person in the world who is trying to execute this vendor's app from a separate program, so I hate to bother the vendor with the issue if there's a workaround that I can implement myself here on my server machine. Plus, I'm in kind of a hurry and I don't want to wait 4 or 6 months for the vendor to put in a fix and deploy a new version. Thanks for any advice you can give.

    Read the article

  • Automatically extracting inline XSD from WSDL into XSD file(s)

    - by Steven Geens
    I am using a third-party Web Service whose definition and implementation are beyond my control. This web service will change in the future. The Web Service should be used to generate an XML file which contains some of the same data (represented by the same XSD types) as the Web Service, plus some extra information generated by the program. My approach: create my own XSD referring to the XSD definitions of the WSDL of the called web service (this XSD also includes XSD types for the extra information, obviously); use a Java XML databinding framework (like ADB or JiBX) to generate the databinding classes from my own XSD file from step 1; use a Java SOAP framework (like Axis2 or CXF) with the same databinding framework to generate the databinding classes from the WSDL (this would enable me to use the objects retrieved by the web service directly in the generation of the XML). The XSD types that I am going to use in my own XSD file, but that are defined in the WSDL, are subject to change. Whenever they change, I would like to automatically run the XSD and WSDL databinding again. (If the change is significant enough, this might trigger some development effort. (But usually not.)) My problem: in step 1 I need an XSD referring to the same types as used by the Web Service. The WSDL refers to another WSDL, which refers to another WSDL, etc. Eventually there is a WSDL with the needed inline XSD types. As far as I know there is no way to directly reference the inline XSD types of a WSDL from an XSD. The approach I would think most viable is to include an extra step in the automatic processing (before the databinding) that extracts the inline XSD from the WSDL into other XSD file(s). These other XSD file(s) can then be referred to by my own XSD file. Things I'd like to avoid: manually copy-pasting the inline XSD into an XSD file (I am looking for an automatic process); any manual steps (like determining the WSDL that contains the inline types manually (the location of that WSDL changes as well)); using xsd:any in my own XSD (I would like my own XSD file to be correct); using a non-Java technology (like .NET); huge amounts of implementation (but hints on how you would implement such an extraction are welcome anyway). PS: I found some similar questions, but they all had responses like: WTH would you want to do that? That is the reason for my rather large background story.
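
    A minimal sketch of such an extraction step, in Java with only JAXP (no extra libraries), could look like the following. It is untested and makes assumptions: it reads a single WSDL (the one that actually contains the inline schemas), takes the WSDL location and an output directory as hypothetical command-line arguments, and does not follow wsdl:import chains or rewrite schemaLocation attributes, which a real build step would still have to handle. Namespace prefixes that are only referenced inside attribute values (e.g. type="tns:Foo") and declared on the wsdl:definitions element may also need to be re-declared on the extracted schemas.

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class InlineSchemaExtractor {
            private static final String XSD_NS = "http://www.w3.org/2001/XMLSchema";

            public static void main(String[] args) throws Exception {
                // args[0] = WSDL path or URL, args[1] = output directory (hypothetical arguments)
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                Document wsdl = dbf.newDocumentBuilder().parse(args[0]);

                // Inline type definitions live as xsd:schema elements under wsdl:types
                NodeList schemas = wsdl.getElementsByTagNameNS(XSD_NS, "schema");
                Transformer serializer = TransformerFactory.newInstance().newTransformer();
                for (int i = 0; i < schemas.getLength(); i++) {
                    Element schema = (Element) schemas.item(i);
                    File out = new File(args[1], "schema" + i + ".xsd");
                    // Write each inline schema out as a standalone XSD document
                    serializer.transform(new DOMSource(schema), new StreamResult(out));
                    System.out.println(schema.getAttribute("targetNamespace") + " -> " + out);
                }
            }
        }

    The generated files could then be referenced from your own XSD via xsd:import, and the step can be hooked into the same automated build that regenerates the databinding classes.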

    Read the article

  • HOWTO: disable jmx in activemq network of brokers (spring, xbean)

    - by subes
    Since I've struggled a lot with this problem, I am posting my solution. Disabling JMX in an ActiveMQ network of brokers removes race conditions around the registration of the JMX connector. When starting multiple ActiveMQ servers on the same machine: Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi] Another problem is that even if you don't cause a race condition, this exception can still occur, even when starting one broker after another and waiting for each to initialize properly in between. If one process is run by root as the first instance and the other as a normal user, somehow the user process tries to register its own JMX connector, even though there already is one. Another exception happens when the broker that successfully registered the JMX connector goes down: Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused] Those exceptions cause the network of brokers to stop working, or to not work at all. The trick to disabling JMX was that JMX had to be disabled in the connection factory as well. The documentation at http://activemq.apache.org/jmx.html does not explicitly say that this is needed. So I had to struggle for two days until I found the solution: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:amq="http://activemq.apache.org/schema/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.1.xsd"> <!-- Spring JMS Template --> <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate"> <constructor-arg ref="connectionFactory" /> </bean> <!-- Caching, so that the JMS template is actually usable performance-wise --> <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"> <constructor-arg ref="amqConnectionFactory" /> <property name="exceptionListener" ref="jmsExceptionListener" /> <property name="sessionCacheSize" value="1" /> </bean> <!-- Each client connects to its own broker; the brokers are networked with each other. Only if JMX is disabled again here does it actually stay disabled... --> <amq:connectionFactory id="amqConnectionFactory" brokerURL="vm://broker:default?useJmx=false" /> <!-- The brokers pick their own ports and are connected to each other, forming a grid. This is somewhat slower, but more fault-tolerant. See http://activemq.apache.org/networks-of-brokers.html --> <amq:broker useJmx="false" persistent="false"> <!-- Needed to disable JMX for good --> <amq:managementContext> <amq:managementContext connectorHost="localhost" createConnector="false" /> </amq:managementContext> <!-- Now the normal configuration for the network of brokers --> <amq:networkConnectors> <amq:networkConnector networkTTL="1" duplex="true" dynamicOnly="true" uri="multicast://default" /> </amq:networkConnectors> <amq:persistenceAdapter> <amq:memoryPersistenceAdapter /> </amq:persistenceAdapter> <amq:transportConnectors> <amq:transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default" /> </amq:transportConnectors> </amq:broker> With this, there is no need to specify -Dcom.sun.management.jmxremote=false for the JVM (which somehow didn't work for me anyway, because the connection factory still started the JMX connector).
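
    For anyone configuring the broker in plain Java instead of Spring/XBean, the same two-sided idea (JMX off on the broker and again on the connection URL) can be sketched roughly like this against the ActiveMQ 5.x API. It is untested, and the broker name, port and URL options are only meant to mirror the XML above:

        import org.apache.activemq.ActiveMQConnectionFactory;
        import org.apache.activemq.broker.BrokerService;

        public class EmbeddedBrokerWithoutJmx {
            public static void main(String[] args) throws Exception {
                // Embedded broker, analogous to <amq:broker useJmx="false" persistent="false">
                BrokerService broker = new BrokerService();
                broker.setBrokerName("broker");
                broker.setUseJmx(false);      // no MBean server / RMI connector registration
                broker.setPersistent(false);  // in-memory only, like memoryPersistenceAdapter
                broker.addConnector("tcp://localhost:0");  // let the broker pick a free port
                broker.start();

                // The vm:// URL repeats useJmx=false, as in the XML above; otherwise the
                // connection factory can spin up its own broker with JMX enabled.
                ActiveMQConnectionFactory factory =
                        new ActiveMQConnectionFactory("vm://broker:default?useJmx=false");
                factory.createConnection().close();  // smoke test

                broker.stop();
            }
        }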

    Read the article

  • Java, Jacob and Microsoft Outlook events: Receiving "Can't find event iid" Error

    - by Adam Paynter
    I am writing a Java program that interacts with Microsoft Outlook using the Jacob library (bridges COM and Java). This program creates a new MailItem, displaying its Inspector window to the user. I wish to subscribe to the inspector's Close event to know when the user is finished editing their mail item. To subscribe to the event, I followed the instructions in Jacob's documentation (about 2⁄3 down the page): The current [event] model is conceptually similar to the Visual Basic WithEvents construct. Basically, I provide a class called com.jacob.com.DispatchEvents which has a constructor that takes a source object (of type com.jacob.com.Dispatch) and a target object (of any type). The source object is queried for its IConnectionPointContainer interface and I attempt to obtain an IConnectionPoint for its default source interface (which I obtain from IProvideClassInfo). At the same time, I also create a mapping of DISPID's for the default source interface to the actual method names. I then use the method names to get jmethodID handles from the target Java object. All event methods currently must have the same signature: one argument which is a Java array of Variants, and a void return type. Here is my InspectorEventHandler class, conforming to Jacob's documentation: public class InspectorEventHandler { public void Activate(Variant[] arguments) { } public void BeforeMaximize(Variant[] arguments) { } public void BeforeMinimize(Variant[] arguments) { } public void BeforeMove(Variant[] arguments) { } public void BeforeSize(Variant[] arguments) { } public void Close(Variant[] arguments) { System.out.println("Closing"); } public void Deactivate(Variant[] arguments) { } public void PageChange(Variant[] arguments) { } } And here is how I subscribe to the events using this InspectorEventHandler class: Object outlook = new ActiveXComponent("Outlook.Application"); Object mailItem = Dispatch.call(outlook, "CreateItem", 0).getDispatch(); Object inspector = Dispatch.get(mailItem, "GetInspector").getDispatch(); InspectorEventHandler eventHandler = new InspectorEventHandler(); // This supposedly registers eventHandler with the inspector new DispatchEvents((Dispatch) inspector, eventHandler); However, the last line fails with the following exception: Exception in thread "main" com.jacob.com.ComFailException: Can't find event iid at com.jacob.com.DispatchEvents.init(Native Method) at com.jacob.com.DispatchEvents.(DispatchEvents.java) at cake.CakeApplication.run(CakeApplication.java:30) at cake.CakeApplication.main(CakeApplication.java:15) couldn't get IProvideClassInfo According to Google, a few others have also received this error. Unfortunately, none of them have received an answer. I am using version 1.7 of the Jacob library, which claims to prevent this problem: Version 1.7 also includes code to read the type library directly from the progid. This makes it possible to work with all the Microsoft Office application events, as well as IE5 events. For an example see the samples/test/IETest.java example. I noticed that the aforementioned IETest.java file subscribes to events like this: new DispatchEvents((Dispatch) ieo, ieE,"InternetExplorer.Application.1"); Therefore, I tried subscribing to my events in a similar manner: new DispatchEvents((Dispatch) inspector, eventHandler, "Outlook.Application"); new DispatchEvents((Dispatch) inspector, eventHandler, "Outlook.Application.1"); new DispatchEvents((Dispatch) inspector, eventHandler, "Outlook.Application.12"); All these attempts failed with the same error.
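
    The "couldn't get IProvideClassInfo" line suggests the Inspector object does not expose its type information the way DispatchEvents expects. One workaround that is often suggested, assuming your Jacob build provides the constructor overload that also takes a type library path (newer releases do; I am not certain about 1.7), is to point DispatchEvents directly at Outlook's type library so the event interface can be resolved without IProvideClassInfo. The sketch below is untested, and the MSOUTL.OLB path is a hypothetical example that depends on the installed Office version:

        import com.jacob.activeX.ActiveXComponent;
        import com.jacob.com.Dispatch;
        import com.jacob.com.DispatchEvents;
        import com.jacob.com.Variant;

        public class InspectorCloseExample {
            // Event sink: Jacob matches handler methods by name, signature (Variant[]) -> void
            public static class InspectorEvents {
                public void Close(Variant[] args) {
                    System.out.println("Inspector closed");
                }
            }

            public static void main(String[] args) {
                ActiveXComponent outlook = new ActiveXComponent("Outlook.Application");
                Dispatch mailItem = Dispatch.call(outlook, "CreateItem", 0).getDispatch();
                Dispatch inspector = Dispatch.get(mailItem, "GetInspector").getDispatch();

                // Hypothetical path - adjust to the MSOUTL.OLB of the installed Office version
                String typeLib = "C:\\Program Files\\Microsoft Office\\OFFICE12\\MSOUTL.OLB";
                new DispatchEvents(inspector, new InspectorEvents(),
                        "Outlook.Application", typeLib);

                Dispatch.call(inspector, "Display");
                // ... keep the process alive while the user edits the item
            }
        }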

    Read the article

  • Can I force JAXB not to convert " into &quot;, for example, when marshalling to XML?

    - by Elliot
    I have an Object that is being marshalled to XML using JAXB. One element contains a String that includes quotes ("). The resulting XML has &quot; where the " existed. Even though this is normally preferred, I need my output to match a legacy system. How do I force JAXB to NOT convert the HTML entities? -- Thank you for the replies. However, I never see the handler escape() called. Can you take a look and see what I'm doing wrong? Thanks! package org.dc.model; import java.io.IOException; import java.io.Writer; import javax.xml.bind.JAXBContext; import javax.xml.bind.JAXBException; import javax.xml.bind.Marshaller; import org.dc.generated.Shiporder; import com.sun.xml.internal.bind.marshaller.CharacterEscapeHandler; public class PleaseWork { public void prettyPlease() throws JAXBException { Shiporder shipOrder = new Shiporder(); shipOrder.setOrderid("Order's ID"); shipOrder.setOrderperson("The woman said, \"How ya doin & stuff?\""); JAXBContext context = JAXBContext.newInstance("org.dc.generated"); Marshaller marshaller = context.createMarshaller(); marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE); marshaller.setProperty(CharacterEscapeHandler.class.getName(), new CharacterEscapeHandler() { @Override public void escape(char[] ch, int start, int length, boolean isAttVal, Writer out) throws IOException { out.write("Called escape for characters = " + ch.toString()); } }); marshaller.marshal(shipOrder, System.out); } public static void main(String[] args) throws Exception { new PleaseWork().prettyPlease(); } } -- The output is this: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <shiporder orderid="Order's ID"> <orderperson>The woman said, &quot;How ya doin &amp; stuff?&quot;</orderperson> </shiporder> and as you can see, the callback is never displayed. (Once I get the callback being called, I'll worry about having it actually do what I want.) --
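
    A frequent cause of the escape handler silently never being called is a mismatch between the JDK-bundled JAXB runtime (com.sun.xml.internal.bind.*) and the standalone reference implementation (com.sun.xml.bind.*): the property key and the handler interface must come from the same runtime that created the JAXBContext. The following untested sketch assumes the standalone RI jar is on the classpath and reuses the Shiporder class from the question; it keeps & and < escaped so the XML stays well formed, but leaves quotes alone:

        import java.io.IOException;
        import java.io.Writer;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import org.dc.generated.Shiporder;
        // Must match the runtime in use: standalone RI here, not com.sun.xml.internal.bind
        import com.sun.xml.bind.marshaller.CharacterEscapeHandler;

        public class NoQuoteEscapeDemo {
            public static void main(String[] args) throws Exception {
                JAXBContext context = JAXBContext.newInstance(Shiporder.class);
                Marshaller m = context.createMarshaller();
                m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
                m.setProperty("com.sun.xml.bind.marshaller.CharacterEscapeHandler",
                        new CharacterEscapeHandler() {
                            public void escape(char[] ch, int start, int length,
                                               boolean isAttVal, Writer out) throws IOException {
                                for (int i = start; i < start + length; i++) {
                                    char c = ch[i];
                                    if (c == '&')      out.write("&amp;");  // keep XML well formed
                                    else if (c == '<') out.write("&lt;");
                                    else if (c == '>') out.write("&gt;");
                                    else               out.write(c);        // " and ' pass through
                                }
                            }
                        });

                Shiporder order = new Shiporder();
                order.setOrderperson("The woman said, \"How ya doin & stuff?\"");
                m.marshal(order, System.out);  // handler is consulted when writing to a stream
            }
        }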

    Read the article

  • javamail error: must issue starttls command first

    - by bobby
    I'm trying to send a mail using the JavaMail API with the code below. When I compile and run the class I get the error below, which says 'must issue starttls command first'; there also seems to be something about the getProvider() function, but I don't know what the errors mean. import javax.servlet.*; import javax.servlet.http.*; import java.io.*; import javax.mail.*; import javax.mail.internet.*; import javax.mail.event.*; import javax.mail.Authenticator; import java.net.*; import java.util.Properties; public class mailexample { public static void main (String args[]) throws Exception { String from = args[0]; String to = args[1]; try { Properties props=new Properties(); props.put("mail.transport.protocol", "smtp"); props.put("mail.smtp.host","smtp.gmail.com"); props.put("mail.smtp.port", "25"); props.put("mail.smtp.auth", "true"); javax.mail.Authenticator authenticator = new javax.mail.Authenticator() { protected javax.mail.PasswordAuthentication getPasswordAuthentication() { return new javax.mail.PasswordAuthentication("[email protected]", "pass"); } }; Session sess=Session.getDefaultInstance(props,authenticator); sess.setDebug (true); Transport transport =sess.getTransport ("smtp"); Message msg=new MimeMessage(sess); msg.setFrom(new InternetAddress(from)); msg.addRecipient(Message.RecipientType.TO, new InternetAddress(to)); msg.setSubject("Hello JavaMail"); msg.setText("Welcome to JavaMail"); transport.connect(); transport.send(msg); } catch(Exception e) { System.out.println("err"+e); } } } error: C:\Users\bobby\Desktop>java mailexample [email protected] [email protected] DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc] DEBUG SMTP: useEhlo true, useAuth true DEBUG SMTP: useEhlo true, useAuth true DEBUG: SMTPTransport trying to connect to host "smtp.gmail.com", port 25 DEBUG SMTP RCVD: 220 mx.google.com ESMTP q10sm12956046rvp.20 DEBUG: SMTPTransport connected to host "smtp.gmail.com", port: 25 DEBUG SMTP SENT: EHLO bobby-PC DEBUG SMTP RCVD: 250-mx.google.com at your service, [60.243.184.29] 250-SIZE 35651584 250-8BITMIME 250-STARTTLS 250 ENHANCEDSTATUSCODES DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc] DEBUG SMTP: useEhlo true, useAuth true DEBUG SMTP: useEhlo true, useAuth true DEBUG: SMTPTransport trying to connect to host "smtp.gmail.com", port 25 DEBUG SMTP RCVD: 220 mx.google.com ESMTP l29sm12930755rvb.16 DEBUG: SMTPTransport connected to host "smtp.gmail.com", port: 25 DEBUG SMTP SENT: EHLO bobby-PC DEBUG SMTP RCVD: 250-mx.google.com at your service, [60.243.184.29] 250-SIZE 35651584 250-8BITMIME 250-STARTTLS 250 ENHANCEDSTATUSCODES DEBUG SMTP SENT: MAIL FROM: DEBUG SMTP RCVD: 530 5.7.0 Must issue a STARTTLS command first. l29sm12930755rvb.16 DEBUG SMTP SENT: QUIT errjavax.mail.SendFailedException: Sending failed; nested exception is: javax.mail.MessagingException: 530 5.7.0 Must issue a STARTTLS command first. l29sm12930755rvb.16
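
    The server rejects MAIL FROM because the session is still plain text; Gmail requires the client to upgrade the connection with STARTTLS (or to use SMTPS on port 465) before authenticating and sending. A minimal sketch using JavaMail's STARTTLS support is shown below; the addresses and password are placeholders, and port 587 is the usual submission port for Gmail:

        import java.util.Properties;
        import javax.mail.Message;
        import javax.mail.PasswordAuthentication;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class GmailStartTlsExample {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.put("mail.smtp.port", "587");              // submission port
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.starttls.enable", "true");  // issue STARTTLS before MAIL FROM

                Session session = Session.getInstance(props, new javax.mail.Authenticator() {
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("user@gmail.com", "password"); // placeholders
                    }
                });

                Message msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress("user@gmail.com"));
                msg.addRecipient(Message.RecipientType.TO, new InternetAddress("someone@example.com"));
                msg.setSubject("Hello JavaMail");
                msg.setText("Welcome to JavaMail");

                Transport.send(msg);  // static send() connects and closes by itself
            }
        }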

    Read the article

< Previous Page | 269 270 271 272 273 274 275 276 277 278 279 280  | Next Page >