Search Results

Search found 30080 results on 1204 pages for 'iframe app'.


  • Anyone know a good mind mapper that works with a scheduler?

    - by GLycan
    TL;DR: Mind-mapping tasks that get processed into a schedule based on task metadata.

    I have all sorts of ideas about what to invest resources (mainly time) in, but when I actually have time to do something I usually end up browsing reddit for not knowing what to do, and the frequency with which I forget deadlines scares me. I'd love to bring order and structure into my mind, and always know what to do next.

    So, I want a mind-mapping app where I'd give each branch (types and subtypes of things I want to do) an importance score (if there were two branches, one scored 60 and the other 40, they would respectively get 60% and 40% of the parent's importance, with the root being 100) and an interval at which that branch should be revised/updated (a hobby I want to try out might be checked, say, once a week, while a school subject should be checked once a day). Each leaf (something I want/need to do) would get how much time it takes, a deadline (if any), and optionally an absolute importance, a recurrence (guitar practice might repeat once a week), and prerequisites (reading something requires that book (although that could be brought somewhere), coding requires a box, jogging requires being outside), and maybe some other flags, like whether it's enjoyable or not.

    It should either be packaged with, or work with, a scheduler app, to which I'd say: look, my day works this way (completely busy from 8 to 9:15, then 15 minutes of being inside with nothing, ..., two hours with box and possibility to go outside, etc.), with such-and-such pattern being school, happening every weekday except such-and-such days. The output should be a schedule, fit for printing or, when I finally get an Android, mobile viewing, that schedules tasks with regard to availability of resources and importance (importance being derived from the leaf-task's parent branches) and the set of flags (all work and no play makes me a dull boy). One of these tasks should be reviewing anything that should be updated on that day, including future day layouts (e.g., if the time slots of future days have changed; this should be done every day).

    Does anyone know some collection of preferably open-source (or free, or pirateable) tools, or better yet a single one, that accomplishes this task? I know Python pretty well, and should be able to write any necessary glue.
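    Since the poster mentions knowing Python, here is a minimal sketch of the importance-propagation rule described above (the Node class and its field names are made up for illustration; weights are assumed positive):

        class Node:
            """A branch or leaf in the mind map."""
            def __init__(self, name, weight, children=None):
                self.name = name            # label for the branch or leaf
                self.weight = weight        # relative score among its siblings
                self.children = children or []

        def assign_importance(node, parent_importance=100.0):
            """Split a parent's importance among its children by relative weight."""
            node.importance = parent_importance
            total = sum(child.weight for child in node.children)
            for child in node.children:
                assign_importance(child, parent_importance * child.weight / total)

        # two branches weighted 60 and 40 get 60% and 40% of the root's 100
        root = Node("everything", 100, [Node("school", 60), Node("hobbies", 40)])
        assign_importance(root)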


  • How to configure a trusted connection between IIS 7 and SQL Server 2005?

    - by user1180652
    How do I configure a trusted connection between IIS 7 and SQL Server 2005? My webapp was working fine with Windows Authentication enabled in IIS. Now, in order to solve a problem, we need to use a trusted connection. Unfortunately, enabling the trusted connection in the web.config broke the webapp. Oddly enough, when I run this application with a trusted connection from my local dev machine (using the Cassini web server), it works.

    IIS (Windows Server 2008) is running on one machine. The database (SQL Server 2005, but it could migrate to 2008) is running on another machine. We are on a Windows domain running AD. All traffic is within our own firewall - no public access. Beyond that, I can't provide much info, but I can find it. We're very "compartmentalized" (we have server people, security people, Oracle people, SQL Server people, etc.). Thanks!

    Update 02/14/2012 0902: The webapp is now functional (the app is no longer broken), but the main issue is still unresolved. I now have the app's application pool running as a domain account with permissions on the SQL Server box and the IIS box. We were using this account to run the application but, and here's the problem, we need to log the real user name that made a change. When using the service account, the name of that service account appeared in the audit tables, making the auditing quite useless. So, now I'm at least running again. The connection string in the web.config is using "Trusted_Connection=True", the appPool is using a domain account with access to both boxes, BUT when I make a change (logged in as me) the name of the service account (the appPool identity) is still logged in the audit tables. I also manually granted full permissions to the service account on the webapp folder. What do I need to do in order to log my name, not the service account, in the audit tables? Everything I'm reading says I need to establish a trusted connection between the two servers.
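    For the "log the real user" half of this, the standard ASP.NET ingredient is impersonation layered on Windows authentication - a minimal web.config sketch. (Impersonation alone only covers the first hop; carrying the caller's identity on from the IIS box to the SQL Server box is the classic double-hop problem and generally needs Kerberos constrained delegation configured for the app pool's domain account.)

        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>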


  • ArchBeat Link-o-Rama Top 10 for October 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for October 2012.

    OAM/OVD JVM Tuning | @FusionSecExpert
    Vinay from the Oracle Fusion Middleware Architecture Group (known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    SOA Galore: New Books for Technical Eyes Only
    Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.

    Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley
    "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic – when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger
    "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." -- Irving Wladawsky-Berger

    Eventually, 90% of tech budgets will be outside IT departments | ZDNet
    Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT.

    ADF Mobile - Login Functionality | Andrejus Baranovskis
    "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.

    Podcast: Are You Future Proof? - Part 2
    In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen (Vennster) discuss re-tooling one's skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what's truly valuable.

    ADF Mobile Custom Javascript — iFrame Injection | John Brunswick
    The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.

    Oracle Solaris 11.1 update focuses on database integration, cloud | Mark Fontecchio
    TechTarget editor Mark Fontecchio reports on the recent Oracle Solaris 11.1 release, with comments from IDC's Al Gillen.

    Architects Matter: Making sense of the people who make sense of enterprise IT
    Why do architects matter? Oracle Enterprise Architect Eric Stephens suggests that you ask yourself this question the next time you take the elevator to the Oracle offices on the 45th floor of the Willis Tower in Chicago, Illinois (or any other skyscraper, for that matter). If you had to take the stairs to get to those offices, who would you blame? "You get the picture," he says. "Architecture is essential for any necessarily complex structure, be it a building or an enterprise."

    Thought for the Day
    "I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas." — Frederick P. Brooks
    Source: SoftwareQuotes.com


  • Load Balancing Rails on Apache 2.x

    - by revgum
    My situation is that I need to proxy traffic to the root of my web server to port 81 for IIS, and then any traffic to a sub-directory needs to be directed to the Rails app:

    my-server.com/ - needs to proxy to port 81
    my-server.com/myapp - needs to point to the Rails app

    This seems to be working alright for the Rails application, but the images, javascripts, and stylesheets are not actually being served. I've tried to fiddle with the ProxyPass lines but it still doesn't work for me... can anyone help? Here's the complete VirtualHost portion of my config:

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so
        ProxyRequests off
        <Proxy balancer://myapp_cluster>
          BalancerMember http://127.0.0.1:3001
          BalancerMember http://127.0.0.1:3002
        </Proxy>
        <VirtualHost *:80>
          DocumentRoot "c:\ruby\apps\myapp\public"
          <Directory /myapp >
            Options FollowSymLinks
            AllowOverride None
          </Directory>
          ProxyPass /myapp/images !
          ProxyPass /myapp/stylesheets !
          ProxyPass /myapp/javascripts !
          ProxyPass /myapp/ balancer://myapp_cluster/
          ProxyPassReverse /myapp/ balancer://myapp_cluster/
          ProxyPreserveHost on
          ProxyPass / http://localhost:81/
          ErrorLog "c:\ruby\apps\myapp\log\error.log"
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog "c:\ruby\apps\myapp\log\access.log" combined
        </VirtualHost>
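    One thing worth checking (a sketch, assuming the standard Rails public/ layout implied by the DocumentRoot above): with DocumentRoot pointed at the app's public folder, an excluded request like /myapp/images/logo.png maps on disk to public\myapp\images\logo.png, which doesn't exist. Alias directives can map the asset URLs back onto the right folders:

        Alias /myapp/images "c:/ruby/apps/myapp/public/images"
        Alias /myapp/stylesheets "c:/ruby/apps/myapp/public/stylesheets"
        Alias /myapp/javascripts "c:/ruby/apps/myapp/public/javascripts"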


  • Redirect problems with my .htaccess file?

    - by Glenn Curtis
    I am hoping you can help me! Below is the .htaccess file for my Apache server running on top of Ubuntu Server. This is my local server, which I installed so I can develop my site on it instead of on my live site. I now have all my files and the database on localhost, but each time I access my server, vaio-server (it's a Sony laptop), it just takes me to my live site! Everything is in the root of Apache, /var/www - it's the only site I will develop on this system, so I don't need to configure it to look at anything more than this one site. I think that's all; all the Apache files, sites-available/default etc., are as standard. Please help!! Many thanks, Glenn Curtis.

        DirectoryIndex index.php index.html
        # Upload sizes
        php_value upload_max_filesize 25M
        php_value post_max_size 25M
        # Avoid folder listings
        Options -Indexes
        <IfModule mod_rewrite.c>
          Options +FollowSymlinks
          RewriteEngine on
          RewriteBase /
          # Maintenance
          #RewriteCond %{REQUEST_URI} !/maintenance.html$
          #RewriteRule $ /maintenance.html [R=302,L]
          # Redirects to www
          #RewriteCond %{HTTP_HOST} !^vaio-server [NC]
          #RewriteCond %{HTTPS}s ^on(s)|off
          #RewriteRule ^(.*)$ glenns-showcase.net/$1 [R=301,QSA,L]
          # Empty string
          RewriteRule ^$ app/webroot/ [L]
          RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>


  • .NET 2.0 Application now running slow on IIS 7.5

    - by Valien
    I recently moved (and am still testing) an application from a Windows 2003 Server (physical box) running IIS 6.x to a Windows 2008 R2 Standard (VM) IIS 7.5 server. The application is a .NET Framework 2.0 application and is running under a 2.0 app pool. The site works great except for one thing: it takes forever to get a request back. I've been tracking it with Chrome's Inspect Element; it queries the site and can take up to 45 seconds to answer. When it does respond, the page(s) render instantly, but that initial request is killing it. I see no error logs or issues in the application, the Windows Event Viewer, or even the IIS logs, so I'm not sure where to look next.

    One change is that the app previously resided behind a PIX firewall and is now behind a larger network environment in a DMZ zone (and I believe NetScaler is also being used to manage the network). I do not have the rights/ability to look at the network itself, but I can ask the data center folks to dig deeper. First, though, I wanted to make sure it's not my application or IIS causing the slowdown.

    In summary: the .NET 2.0 application worked great on IIS 6.x. Moved to an IIS 7.5 server, it is now slow to respond, but once it does respond, the pages come back instantly.

    Edit, for the solution: it turned out the SOAP calls were slowing the site down. In the new data center my application cannot complete its SOAP calls, so they time out after 40-45 seconds or so. Now I'm trying to find out whether I can install a proxy server to redirect these...
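    If routing the SOAP traffic through a proxy does turn out to be the answer, .NET can be pointed at one per-application rather than system-wide - a minimal web.config sketch (the proxy address is hypothetical):

        <system.net>
          <defaultProxy enabled="true">
            <proxy proxyaddress="http://proxy.example.com:8080" bypassonlocal="true" />
          </defaultProxy>
        </system.net>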


  • Can I convert an existing Firefox installation to ESR without a re-install?

    - by Iszi
    It took some jumping through hoops (including a mailing list subscription that I apparently didn't need), but I finally found where to download the Firefox ESR. This is great for fresh installs, but I was wondering if there's a way to simply convert existing installations to the ESR configuration without having to do a full install.

    As I understand it, the only difference between ESR and regular Firefox will be how they receive updates. After the new standard version of Firefox comes out, ESR releases will only receive critical security updates and bug fixes for the remainder of their support life. Newer versions of Firefox's standard build will have all the latest and greatest features, while ESR releases are meant to provide stability for environments that can't be expected to keep up with a new full version number change as often as Mozilla does them.

    In regular Firefox, the About screen shows that I am using the "release" update channel. Is switching to ESR really just a matter of switching the update channel? I presume this can be done in about:config by changing app.update.channel and probably also app.update.url. However, I don't know what these values should be for ESR or if anything else should be tweaked.

    So, is it possible to switch to ESR without a reinstall and, if so, how? (Note: While this question was written originally for Firefox 10, I expect any answers will apply to future ESR versions as well.)
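    For what it's worth, the channel isn't set in about:config alone - it is defined in a preferences file inside the installation directory. A sketch of where it lives (whether an in-place channel switch leaves you with a properly functioning ESR is exactly the open question here):

        // defaults/pref/channel-prefs.js, inside the Firefox install directory
        pref("app.update.channel", "esr");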


  • Ruby on Rails cannot find Initializer?

    - by Ryan M.
    Hello, I am trying to deploy an app to a fresh Ubuntu 10 installation using Passenger 2.2.15, Rails 2.3.5, Ruby 1.8.7, and Apache 2.2.14. However, even with a default Rails app (sudo rails defaultapp), I am receiving the following error: "no such file to load -- initializer". I'm not sure which files you might need copies of in order to diagnose this problem, so I'll copy a few here and hope that it will help. Thanks for any help you can provide. -RM

    /etc/apache2/sites-available/default:

        <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www/appname/public
          <Directory />
            Options FollowSymLinks
            AllowOverride None
          </Directory>
          <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
          </Directory>
          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
        </VirtualHost>

    /etc/apache2/mods-available/passenger.conf:

        <IfModule passenger_module>
          PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15
          PassengerRuby /usr/bin/ruby1.8
        </IfModule>

    /etc/apache2/mods-available/passenger.load:

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.15/ext/apache2/mod_passenger.so
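    "no such file to load -- initializer" usually means the Rails framework itself isn't visible to the Ruby that Passenger runs (a Rails 2.x app's config/boot.rb requires initializer out of the rails gem). A quick check along those lines (a sketch, assuming system-wide gems):

        # run with the same ruby Passenger uses; if this fails, the rails gem
        # isn't visible and needs e.g. `sudo gem install rails -v 2.3.5`
        /usr/bin/ruby1.8 -rubygems -e "gem 'rails', '2.3.5'; require 'initializer'; puts 'rails found'"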


  • using a second computer as a mere screen/monitor in X (VNC?)

    - by lara michaels
    Hello. My goal is to use three monitors with my Linux system. It is a laptop, so adding another video card is not the easiest solution. (I have investigated a number of such options: getting a docking station with a PCI slot, USB/CardBus VGA adapters, etc., and for the time being don't want to go that way.) I am wondering if using an older desktop+screen I have lying around as the third "monitor" might be the easiest solution, if only there is a way to get it to work as a seamless, integrated desktop.

    I was wondering if I can use VNC or perhaps X itself (?) to achieve the following:

    - computer A is my main computer; it has all my files, etc.
    - computer B is used just to display on an additional screen
    - keyboard+mouse are connected to computer A
    - use VNC or X to connect the two so that computer B shows an X screen that is just as if it were a third physical screen connected to computer A

    I don't know if the last point is clear, but what I mean is that I would like to be able to:

    - have my window manager assign/move around virtual desktops on all three screens
    - move windows back and forth between the screens attached to computer A and the screen of computer B
    - copy something in an app being shown on a screen of computer A and paste it into an app being shown on the screen attached to computer B
    - access the filesystem on my main computer (A) when using applications that are being shown on the screen attached to computer B

    Basically, I would like X to treat computer B just like it was nothing but a third physical screen... Is this doable? : ) ~lara
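    One tool that matches this wish list almost exactly is Xdmx (Distributed Multihead X), which joins X servers on several machines into one logical display, Xinerama-style. A rough sketch of the idea, assuming computer B runs an X server that computer A can reach (hostname illustrative; expect rough edges with compositing desktops):

        # on computer A: start a combined X server spanning A's display and B's
        Xdmx :1 +xinerama -display :0 -display computerB:0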


  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my html/css/js files and then update my cache manifest, users accessing my web app get an updated version of those files. If I change an HTML file, it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:

        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }

        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }

    And in default.conf I have my locations. I would like to have this caching working on all locations except one. How can I configure that? I've tried the following and it isn't working:

        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }

    Thanks
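    The likely catch (a sketch of a fix, assuming the cache headers really do come from the regex locations above): in nginx, regex locations take precedence over ordinary prefix locations, so the plain location /dir1/dir2/ block never wins for .css/.js requests. A ^~ prefix match stops the regex search:

        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            expires off;
            add_header Pragma "no-cache";
            add_header Cache-Control "private, no-cache";
        }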


  • IIS replication - Is it possible

    - by Ian
    Hi All, I have a requirement from a client for a centralised system that all his satellite branches can work on. Currently this is an ASP.NET Web Forms app running under IIS 7 on Windows 2008 R2, using a SQL backend. The client has now requested that each branch have a local server, so that in the event the internet connection is down, the branches' productivity does not suffer. His other request is that everything can be updated via the central hub, with some mechanism filtering the updates down to the individual sites.

    What are my options here? I see the following as possibilities:

    - multiple redundant internet connections controlled by load balancers
    - SQL replication for the DB (what is better: snapshot, merge, or transactional?)
    - rolling my own IIS sync service that periodically checks whether there is a new version of the web app and downloads it (I hope there are better options than this)
    - something way better I don't yet know about (I hope this is the one I need)

    One of my client's concerns is that the branches are often in very remote areas where everything from technicians to internet is hard to find and very scarce. Any ideas, suggestions, tips etc. are welcome. Thanks all
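    On the "roll my own IIS sync service" option: Microsoft's Web Deploy (msdeploy) already does site-level synchronization between IIS boxes and may save writing that service. A sketch, run on a branch server to pull the site from the hub (server and site names are placeholders):

        msdeploy -verb:sync ^
          -source:appHostConfig="Default Web Site",computerName=HubServer ^
          -dest:appHostConfig="Default Web Site"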


  • Windows 8 keeps signing out

    - by bill weaver
    Ran into a strange problem with Windows 8 Pro. Last night I installed Windows 8 Pro as an upgrade on a Sony Vaio laptop that had Windows 7 Pro on it. The install seemed to go okay. Once installed, live tiles seem to work and native/Metro apps will start okay, but soon after going into an app or settings, the screen flashes a few times and we're back at the lock screen. Signing in appears to do a full login. I've tried this with a local account and with a live.com account.

    This is someone else's laptop, so we decided to let it breathe, in case the install was still settling in. Well, they say it's doing the same thing today: open the Music app, and within a minute it's back to the sign-on/lock screen. However, they can go to the actual desktop and run Zune to play music, and it seems happy.

    In the past, I've installed retail Windows 8 Pro clean on a homebrew system and as an upgrade on a Dell laptop with a zillion apps and drivers, neither with any problems. I've also had the Consumer Preview and release candidate installed, with no problems. Any ideas on what's going on here?


  • Microsoft Licensing Scenario/Questions [closed]

    - by user17455
    Possible Duplicate: Can you help me with my software licensing question?

    I am a member of a team developing a third-party application (APP) that listens for and services connections from remote devices via TCP. Some of these remote devices allow one or more users to interact with the remote device; on others, it is impossible for a user to interact with the device. The user/remote device makes no use of any Windows Server service - not DHCP, not IIS, not File Server, not Print Server, not AD. The remote device's only connection to the Windows Server machine is through the APP's TCP ports.

    Our company has no interaction with Microsoft. We do not have a Microsoft sales team. Past inquiries have determined that it is cheaper for us to buy Microsoft software (and CALs) retail than to enter into any kind of "arrangement" with Microsoft.

    I have many questions about SQL Server CALs and Windows Server 2008 CALs. How can I obtain authoritative/legally binding answers? I am not looking for FREE legal advice. I AM looking for FREE advice about who/what/where I can responsibly spend my money to get meaningful information. I fear that passing this on to the local company law firm will just mean that I will be paying them to educate themselves on Microsoft licensing. And if that's like writing code to a new Microsoft API, they are not going to get it right the first time. Going to Microsoft for answers sounds like swimming up to a hungry shark and asking "One leg or two?" I am hoping someone has been down this road before and knows a law firm/lawyer that is experienced in these matters. Any help/suggestion welcome. Thanks.


  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might integrate BIP, they have had to make a choice about how to integrate.

    1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template-processing libraries. Taking this integration route, you are going to need to manage templates, data extraction, and processing yourself. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: Content Manager looks after the report artifacts itself and provides you the UI to manage and run the reports.

    2. Web service level - further up the stack is the web service layer. This sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports, the (locally managed) users on the server, and the catalog itself via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report on the server.

    3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized but again, not easily, nor is it documented. I have messed with running the server pages inside an IFRAME: not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated, but there are a few gotchas you need to get around.

    So a reasonable number of choices, with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.
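    A bare-bones illustration of that third option (the host, port, and report path below are placeholders; the real URL patterns are in the docs referenced above):

        <!-- embedding the BI Publisher report UI in another HTML page -->
        <iframe src="http://bip-host:9704/xmlpserver/SomeFolder/SomeReport.xdo"
                width="100%" height="600" frameborder="0"></iframe>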


  • limiting connections from tomcat to IIS - proxy? iptables?

    - by Chris Phillips
    Howdy, I've got a webapp on Tomcat 6 which is connecting to an M$ PlayReady DRM instance on IIS 6.0. Performance is seen to be best when we benchmark (using ab) the DRM service with 25 concurrent connections, which gives about 250 requests per second, which is ace. Higher concurrent connection counts result in TCP/IP timeouts and other lower-level mess. But there is no way to control how the Tomcat app connects to the service - it's not internally managing a pool of connections etc.; they are all isolated HTTP connections to the server.

    Ideally I'd like a situation where we can have 25 HTTP 1.1 connections kept alive permanently from Tomcat, requesting the licenses through this static pool of connections, which I think would give the best performance. But this is not in the code, so I was looking for a way to possibly simulate it at the Linux level. I was thinking that iptables connlimit might be able to gracefully handle these connections, but whilst it could limit them, it'd probably still annoy the app.

    What about a proxy? nginx (or possibly squid) seems potentially appealing to run on the Tomcat server and hit on localhost, as we might want to add additional DRM servers under load balance anyway. Could this take 100 incoming connections from Tomcat, accept them all, and proxy over to the IIS server in a more respectful manner? Any other angles?

    EDIT - looking over mod_proxy for Apache, which we are already using for conventional purposes on an Apache instance in front of this Tomcat instance, might be ideal. I can set a max value on the ProxyPass to only allow 25 connections, and keep them alive permanently. Is that my answer? Many thanks, Chris
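    A sketch of that mod_proxy idea from the edit (path and host are placeholders; one caveat worth knowing is that with the prefork MPM the max= cap applies per child process, not globally):

        ProxyPass /playready http://iis-server:80/playready max=25 keepalive=On ttl=300
        ProxyPassReverse /playready http://iis-server:80/playready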


  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up to this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. When trying to connect, we got this message:

        ORA-01033: ORACLE initialization or shutdown in progress

    An attempt to alter database open ended up in:

        ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
        ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'

    I tried to shut down / restart the database, but got this:

        Total System Global Area  566231040 bytes
        Fixed Size                  1220604 bytes
        Variable Size             117440516 bytes
        Database Buffers          444596224 bytes
        Redo Buffers                2973696 bytes
        Database mounted.
        ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
        ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'

    When everything continued the same, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespaces using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got "Database not open".

    As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least; the others are working). I would imagine that a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
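    For the record, once a datafile is gone (and a touch-created empty file won't fool Oracle), the usual way to get the rest of the database open is to take that file offline at mount time - a sketch, with the big caveat that this abandons whatever data lived in that file, so a DBA should sanity-check it first:

        -- in SQL*Plus, connected as SYSDBA
        STARTUP MOUNT;
        ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' OFFLINE DROP;
        ALTER DATABASE OPEN;
        -- then drop and recreate the damaged tablespace
        DROP TABLESPACE data INCLUDING CONTENTS;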


  • QMail does not deliver to remote mailboxes for my own domain

    - by lorenzo-s
    Sorry for the title; I don't know how to sum up this situation. I have a web server at mydomain.com running qmail for website-related mail delivery (i.e. newsletters, sign-up confirmations, etc.). qmail here is used only to send mail, because I have a fully working Google Apps Gmail account associated with mydomain.com for normal email receiving.

    qmail runs fine when sending email to remote addresses, for example to [email protected], but fails when sending to [email protected]. I think it's because the server thinks it has to manage mailboxes for mydomain.com locally, instead of handing them off to Gmail. Here is /var/log/qmail/current for two emails: the first one is sent without problems to example.com; the second one fails because it's for mydomain.com:

        2012-11-15 15:04:11.551933500 new msg 262580
        2012-11-15 15:04:11.551936500 info msg 262580: bytes 5604 from <[email protected]> qp 5185 uid 33
        2012-11-15 15:04:11.575910500 starting delivery 316: msg 262580 to remote [email protected]
        2012-11-15 15:04:11.575912500 status: local 0/10 remote 1/20
        2012-11-15 15:04:12.189828500 delivery 316: success: 74.125.136.27_accepted_message./Remote_host_said:_250_2.0.0_OK_1352991894_j49si13055539eep.9/
        2012-11-15 15:04:12.189830500 status: local 0/10 remote 0/20
        2012-11-15 15:04:12.189831500 end msg 262580

        2012-11-15 16:49:20.270332500 new msg 262580
        2012-11-15 16:49:20.270336500 info msg 262580: bytes 2192 from <[email protected]> qp 5479 uid 33
        2012-11-15 16:49:20.315125500 starting delivery 323: msg 262580 to local [email protected]
        2012-11-15 16:49:20.315128500 status: local 1/10 remote 0/20
        2012-11-15 16:49:20.320855500 delivery 323: failure: Sorry,_no_mailbox_here_by_that_name._(#5.1.1)/
        2012-11-15 16:49:20.320858500 status: local 0/10 remote 0/20
        2012-11-15 16:49:20.372911500 bounce msg 262580 qp 5484
        2012-11-15 16:49:20.372914500 end msg 262580

    As you can see, it says "Sorry,_no_mailbox_here_by_that_name". I can't say it's wrong :) How do I solve this? How do I let Google Apps Gmail manage incoming email for mydomain.com, including messages sent by mydomain.com's own qmail server?
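    The classic cause is qmail still listing mydomain.com as a local domain, so it never does the MX lookup that would lead it to Google. A sketch of the usual fix (paths assume a standard /var/qmail install under daemontools; control/virtualdomains is worth checking for the same entry):

        # remove mydomain.com from control/locals so qmail treats it as remote;
        # keep it in control/rcpthosts so mail for it is still accepted
        sed -i '/^mydomain\.com$/d' /var/qmail/control/locals
        # HUP qmail-send so it rereads the control files
        svc -h /service/qmail-send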


  • Application runs fine manually but fails as a scheduled task

    - by user42540
    I wasn't sure if this should go here or on Stack Overflow. I have an application that loads some files from a network share (the input folder), extracts certain data from them, and saves new files (zips them with SharpZipLib) on a different network share (the output folder). This application runs fine when you open it directly, but when it is set up as a scheduled task, it fails in numerous places. The task is scheduled on a Windows 2003 server. Let me say right off the bat: the scheduled task is set to use the same login account that I am currently logged in with, so it's not failing because it's using the LocalSystem account. Something else is going on here.

    Originally, the application was assigning a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done; someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command-line utility to SharpZipLib because there were other issues with the WinZip command line. Anyway, the application failed when trying to assign a drive letter, with the error "connection already established." That wasn't true, and even after trying WNetCancelConnection(), it still didn't work.

    Then I decided to just map the drive manually on the server. Now when the app calls Directory.Exists(inputFolderPath) it returns false, even though the folder does exist. So, for whatever reason, I cannot read this directory from within the application, yet I can manually navigate to the folder in Windows Explorer and open files. The app's log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?


  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a webserver publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use 'cookie'-style authentication, asking for user and password in an HTML form (thus not using HTTP authentication). Being on the same hostname, the Safari browser didn't manage stored passwords too well: if I logged in to one app with user foo, and then went to app two, it would propose user foo and its password in the login form. Changing just the username to bar used to be sufficient to get Safari to autocomplete the correct password in its form field. Annoying, but I could live with it - usernames are short and easy to remember compared to the passwords we use.

    After the update to Safari 5 this seems to no longer be true: if I store in Safari (actually the user keychain on OS X) credentials for https://www.foobarbaz.com/app1 AND credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete both based on the URL. Even editing the keychain to add the path (it stores only the hostname by default) does not help.

    Is there anything I can do to make it work the way I want while still keeping everything on one hostname? Modifying things server-side is of course possible, but I can't switch the apps to HTTP authentication (and not every one of them would support it anyway) to use different 'realms'.


  • Windows 2008 R2 RDS - Double Login

    - by colo_joe
    Issue: double logins when connecting to RemoteApps or Remote Desktop.

    Environment:
    - Gateway = 1 server, 2008 R2. Roles: Gateway, Session Broker, Connection Manager, Session Host Configuration server.
    - Session hosts = 2 servers, 2008 R2. Roles: App Manager and Session Host Configuration.

    Testing: I can get to the URL http://RDS.domain.com/rdweb - I get prompted for authentication (1). Passing authentication, I get the list of RemoteApps. Clicking on a RemoteApp or remote desktop, I get prompted for authentication again (2). Passing authentication, I get access to the app or RDP session.

    Done so far on the session hosts:
    - Signed the rdp files with a certificate.
    - Added the following to the custom RDP settings:
      authentication level:i:0 = if server authentication fails, connect to the computer without warning (Connect and don't warn me).
      prompt for credentials on client:i:1 = RDC will prompt for credentials when connecting to a server that does not support server authentication.
      enablecredsspsupport:i:1 = RDP will use CredSSP, if the operating system supports CredSSP.
    - Edited the javascript file as described in http://support.microsoft.com/kb/977507
    - Added the Connection ID, added the Web Access server to the TS Web Access Computers group on the session host servers, and signed the apps as described in hxxp://blogs.msdn.com/b/rds/archive/2009/08/11/introducing-web-single-sign-on-for-remoteapp-and-desktop-connections.aspx

    Note: this double login happens both internally and externally.


  • Setting up xpra for client use in OS X

    - by Jonathan
    I've been trying to get xpra to run on OS X for the last few days, to connect to my Ubuntu server. Note that there's a GUI for it called Shifter, but that (at least on OS X) is still far too buggy. For those who don't know what xpra is: if you know what screen is, it's like screen for GUI X Windows apps tunneled over ssh. You can render a remote X app locally, so it's faster than sending a series of compressed screenshots (like VNC), but with xpra you can disconnect and reconnect on different computers. To get the basic functionality you can just type "ssh -X server.location" and any GUI app you open from the command line will open locally.

    I've been able to get xpra to build by doing the following:

    1. Download parti-all-0.0.6.tar.gz from the xpra site listed under "upstream" and untar it.
    2. Issue the following MacPorts command (dependencies thanks to RogBlog):
       sudo port install python25 python26 py26-pyrex py26-gtk xorg-libXtst py25-gobject py25-gtk py25-nose py26-nose xorg-libXdamage xorg-libXcomposite xorg-libXtst xorg-libXfixes
    3. In the upstream list of v0.0.06 patches (NOT 0.0.8pre!) on the xpra site listed above, download mswindows-conditional-pyrex.patch.
    4. Open the patch with your favorite text editor and change the single occurrence of "win" in it to "darwin".
    5. Apply the patch to setup.py.
    6. Run do-build on the command line.

    Now here's where I'm stumped: how do I run xpra? The build produces a subdirectory called install/bin in which xpra is located, but when I try to run it I get the following error:

        Traceback (most recent call last):
          File "./xpra", line 4, in
            import xpra.scripts.main
        ImportError: No module named xpra.scripts.main

    There is a file called main.py under xpra/scripts, but I don't know any Python and I'm not sure if this is what it's looking for, or what to do with it even if it is. My goal is to set up xpra so I can install it into /usr/bin (or some other common path for executables) and execute it whenever I please. What do I do next?
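    That traceback just means Python can't find the freshly built xpra package. A guess at the missing piece (the lib path below is an assumption; check what do-build actually produced under install/):

        # point Python at the built modules before launching
        export PYTHONPATH="$PWD/install/lib/python:$PYTHONPATH"
        ./install/bin/xpra start :100    # then `xpra attach` from the client side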


  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing webserver setup for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1 GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put nginx with a simple proxy to gunicorn. Using ab, gunicorn spits out 7700 requests/sec, where nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

        #!/usr/bin/env python
        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return [ "Hello World!" ]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
        }

        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;

            upstream app_server {
                server 127.0.0.1:8000 fail_timeout=0;
            }

            server {
                listen 8080 default;
                keepalive_timeout 5;
                root /home/app/app/static;

                location / {
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://app_server;
                }
            }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
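    One configuration detail that often accounts for part of the gap (a sketch; requires nginx 1.1.4 or newer): out of the box, nginx opens a new TCP connection to the upstream for every request, even when the client uses keep-alive. Persistent upstream connections look like this:

        upstream app_server {
            server 127.0.0.1:8000 fail_timeout=0;
            keepalive 32;               # pool of idle connections to gunicorn
        }

        location / {
            proxy_http_version 1.1;     # upstream keep-alive needs HTTP/1.1
            proxy_set_header Connection "";
            proxy_pass http://app_server;
        }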


  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregator site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It runs WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I don't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the morning and evening when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there?

    EDIT - details of my server configuration: the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS; it is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is nginx and the publishing server is Apache.
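    For Googlebot specifically, the crawl-rate knob lives in Google Webmaster Tools (Site settings), since Googlebot ignores the Crawl-delay directive. A robots.txt sketch still helps with the other well-behaved bots (paths illustrative):

        User-agent: *
        Crawl-delay: 10        # seconds between requests; honored by Bing, Yandex, etc.
        Disallow: /wp-admin/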


  • Doing port forwarding and then using it from within the internal network

    - by Ram Rachum
    We all know that with port forwarding on the router, computers from outside the network are able, on the specified ports, to access internal computers by targeting the external IP. I'm now replacing a TP-Link router with a D-Link VDSL N 6740U router (and have copied over all the settings), and I've noticed that one thing stopped working: with the TP-Link router, you could access those port-forwarded computers from within the network using the external IP, and the connections would be forwarded to the relevant computers. With the new D-Link router, this doesn't work.

    You might be wondering: why would you want to use the external IP and port forwarding when you're inside the internal network anyway and can just use the internal IP? One example of why this is useful: you have an iPhone app that connects to a service on an internal computer. The iPhone app knows to connect to the external IP. When we put that iPhone inside the internal network (via WiFi), it suddenly stops working, because it can't reach the service via the external IP anymore.

    Is it an inherent property of D-Link routers that they do not allow accessing internal servers from inside the network by targeting the external IP? Or is there a way to make it work?
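    The feature that stopped working is usually called NAT loopback (or hairpin NAT), and it is indeed something some router firmwares simply lack. A common workaround when the router can't do it is split DNS, so internal clients resolve the public name straight to the internal address - a sketch using dnsmasq (hostname and address are placeholders):

        # dnsmasq.conf on an internal DNS server (or a router that runs dnsmasq):
        # answer queries for the app's public hostname with the LAN address
        address=/app.example.com/192.168.1.10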


  • Apple: Bind a key to a commandline command?

    - by Stefan Lasiewski
    I have a Mac PowerBook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the command line? For example, I can open up Terminal.app and run the command

        /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine

    which will activate the screensaver and lock my screen. What if I want to bind 'Apple-key L' to this command and execute it globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window?

    I tried to set up global keyboard shortcuts, but it seems that this won't allow me to bind a key to an arbitrary shell command:

        Note: You can create keyboard shortcuts only for existing menu commands. You cannot define keyboard shortcuts for general purpose tasks such as opening an application or switching between applications.

    I tried to set up an application keyboard shortcut, but commands like ScreenSaverEngine don't seem to be applications. Note that this screensaver/lock-screen command is just one example; I have come across other nifty commands which I might want to bind to a key combination as well. I can do this in GNOME and Windows (with varying success). How about with Leopard?
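    One route that worked in that era (a sketch; Quicksilver's Triggers feature is the assumption here, and any similar hotkey launcher would do): wrap the command so a launcher can fire it from a global hotkey.

        # save as a one-line script, e.g. ~/bin/lockscreen.sh, mark it executable,
        # then bind a hotkey to it with Quicksilver's Triggers (or wrap it in an
        # Automator "Run Shell Script" workflow)
        /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine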

