Search Results

Search found 50527 results on 2022 pages for 'http expires'.


  • You couldn't write it - Expired SA account

    - by GrumpyOldDBA
    This is the stuff of DBA nightmares! Email trail: Q. Can you reset the SA account on server XXXXX, we think it has expired and now no-one can work. Connect to server: surely no-one would set up a server with an sa account which expires? Thankfully not. Find the sa password and change the connection to use the SA account. Connect without issue. Me: Have checked the server and the account is fine. A. Thanks, that's great, you've fixed it, we can all work now...
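    For anyone who'd rather check than guess, SQL Server exposes the expiry state of a login through LOGINPROPERTY; a quick sketch (run against the server in question; note sa can only expire if CHECK_EXPIRATION is enabled on the login):

        SELECT LOGINPROPERTY('sa', 'IsExpired')           AS IsExpired,
               LOGINPROPERTY('sa', 'IsLocked')            AS IsLocked,
               LOGINPROPERTY('sa', 'DaysUntilExpiration') AS DaysUntilExpiration;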


  • Java Mission Control for SE Embedded 8

    - by kshimizu-Oracle
    (The original post is in Japanese and was garbled in extraction; the recoverable technical content is reconstructed in English below.)
    Java Mission Control (JMC) is the JVM monitoring and management tool bundled with the JDK, and it can be used against Java SE Embedded 8 to watch CPU usage and JVM/application behaviour through its UI.
    1. What JMC needs from the target:
    - A JMX connection (MBean server): available on Java SE Embedded 8 with the Compact 3 profile or the Full JRE (not with the Minimal VM).
    - Flight Recorder: requires the Full JRE (again, not the Minimal VM).
    - Java ME 8 targets are not supported.
    2. JVM options on the target:
    2.1. For the JMX connection (MBeans):
        >java -Dcom.sun.management.jmxremote=true
              -Dcom.sun.management.jmxremote.port=7091            # management port
              -Dcom.sun.management.jmxremote.authenticate=false   # no authentication
              -Dcom.sun.management.jmxremote.ssl=false            # no SSL
              -jar application.jar
    When connecting to a remote device, also set "-Djava.rmi.server.hostname=192.168.0.20" (the device's IP address/hostname); see the monitoring and management FAQ (http://docs.oracle.com/javase/7/docs/technotes/guides/management/faq.html) for details.
    2.2. For Flight Recorder, add the JVM options "-XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
    3. Starting JMC from the JDK: >"JDK_HOME"/bin/jmc
    4. Connecting JMC to the target JVM: create a new connection in JMC, enter the device's IP address and port, leave the credentials empty (authentication was disabled above), and click OK.
    Reference URLs:
    http://www.oracle.com/technetwork/jp/java/javaseproducts/mission-control/index.html
    http://www.oracle.com/technetwork/jp/java/javaseproducts-old/mission-control/java-mission-control-wp-2008279-ja.pdf
    http://www.oracle.com/technetwork/java/embedded/resources/tech/java-flight-rec-on-java-se-emb-8-2158734.html


  • What's wrong with this vcl config for varnish-cache as load balancer?

    - by dabito
    I have the current configurations active on my default.vcl varnish file on the machine that balances the load for other two machines (the other two machines also have varnish active). My intention is to have this server do only the load balancing and the other machines do the processing and also their own caching. My problem is that even with the config testing (not even a stress test or anything, just a few requests a minute) I get the guru meditation error and have to restart varnish. This is the default.vcl for the load balancing server: backend vader { .host = "app1.server.com"; .probe = { .url = "/"; .interval = 10s; .timeout = 4s; .window = 5; .threshold = 3; } } backend malgus { .host = "app2.server.com"; .probe = { .url = "/"; .interval = 10s; .timeout = 4s; .window = 5; .threshold = 3; } } director dooku round-robin { { .backend = vader; } { .backend = malgus; } } sub vcl_recv { if (req.http.host ~ "^balancer.server.com$") { set req.backend = dooku; } } Am I doing something wrong or missing something? EDIT: This is varnishlog's output: 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839995 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839998 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840001 1.0 0 Backend_health - malgus Still sick 4--X--- 0 3 5 0.000000 3.846876 0 Backend_health - vader Still sick 4--X--- 0 3 5 0.000000 3.839194 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840004 1.0 14 SessionOpen c 10.150.7.151 38272 :80 14 ReqStart c 10.150.7.151 38272 458200540 14 RxRequest c GET 14 RxURL c / 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c / 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:44 GMT 14 TxHeader c X-Varnish: 458200540 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200540 1345840004.916415691 1345840004.965190172 0.020933390 0.048741817 0.000032663 14 SessionClose c error 14 StatSess c 10.150.7.151 38272 0 1 1 0 1 0 256 418 14 SessionOpen c 10.150.7.151 38273 :80 14 ReqStart c 10.150.7.151 38273 458200541 14 RxRequest c GET 14 RxURL c /favicon.ico 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c Accept: */* 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: 
ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c /favicon.ico 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:45 GMT 14 TxHeader c X-Varnish: 458200541 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200541 1345840005.226389885 1345840005.226457834 0.000026941 0.000043154 0.000024796 14 SessionClose c error 14 StatSess c 10.150.7.151 38273 0 1 1 0 1 0 256 418
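    One detail in that log points at the cause: both backends are "Still sick" (the 4--X--- flags mean the probe's HTTP check is failing), so the director has no healthy backend and every request ends in "FetchError no backend connection" and a 503. If app1/app2 serve name-based virtual hosts, a bare .url = "/" probe sends no Host header and may never see a 200 back. A hedged sketch of one backend with a full probe request instead (same timings as in the question; repeat for malgus):

        backend vader {
            .host = "app1.server.com";
            .probe = {
                # send a complete request so a name-based vhost answers 200
                .request =
                    "GET / HTTP/1.1"
                    "Host: app1.server.com"
                    "Connection: close";
                .interval = 10s;
                .timeout = 4s;
                .window = 5;
                .threshold = 3;
            }
        }

    Watch varnishlog for the Backend_health lines to flip back to healthy; until the probes pass, no amount of vcl_recv logic will help.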


  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from internal sources with external data, bringing up the security, connection-string, and exploration issues all over again. Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
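    As a taste of the API route, here is a minimal C# sketch that pulls a feed over HTTP - note the endpoint URL and credentials are placeholders, not the service's real values:

        using System;
        using System.IO;
        using System.Net;

        class DataHubSample
        {
            static void Main()
            {
                // Hypothetical Data Hub OData endpoint - substitute your portal's feed URL.
                var request = (HttpWebRequest)WebRequest.Create(
                    "https://contoso-hub.example.com/odata/SalesData");
                request.Credentials = new NetworkCredential("user", "password");
                request.Accept = "application/atom+xml"; // OData feeds default to Atom

                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd()); // raw feed XML
                }
            }
        }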


  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background I have a client site that consists of a CakePHP installation and a Magento installation: /web/example.com/ /web/example.com/app/ <== CakePHP /web/example.com/app/webroot/ <== DocumentRoot /web/example.com/app/webroot/store/ <== Magento /web/example.com/config/ <== Site-wide config /web/example.com/vendors/ <== Site-wide libraries The server runs Apache 2.2.3. The problem The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc, is very complicated and usually requires a bunch of filters. Abandoned solution At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using abstract subdomains that use the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible. Secondly, The PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment as they were built as I can. Also, I do not want to debug their code to make it portable. Half-baked solution After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this: /web/example.com/ <== A bunch of their files are in here /web/example.com/app/webroot/ <== 1st DocumentRoot; A bunch of their files are in here /web/example.com/app/webroot/store/ <== Some more are in here /web/example.com/site/ <== New dir; Contains only site files /web/example.com/site/app/ <== CakePHP /web/example.com/site/app/webroot/ <== 2nd DocumentRoot /web/example.com/site/app/webroot/store/ <== Magento /web/example.com/site/config/ <== Site-wide config /web/example.com/site/vendors/ <== Site-wide libraries After I made this change, I would not need to pay attention to anything except for the stuff within /web/example.com/site/ and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind is, the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory. Conclusion Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
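    For what it's worth, here is a mod_rewrite sketch of the "try the old webroot first, fall back to the new one" idea, untested on Apache 2.2.3 (paths as in the question), for the virtual host config:

        DocumentRoot /web/example.com/app/webroot
        RewriteEngine On
        # If the request maps to nothing real in the old webroot...
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        # ...serve it from the new site webroot instead
        RewriteRule ^/(.*)$ /web/example.com/site/app/webroot/$1 [L]

    This does not change what PHP sees in $_SERVER['DOCUMENT_ROOT'], so the second concern in the question still stands; the usual cure is to make the CakePHP/Magento bootstrap derive its paths from __FILE__ rather than from DOCUMENT_ROOT.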


  • Is there a difference between multi-tasking and time-sharing?

    - by Dummy Derp
    Just going over my school notes: my teacher identifies multi-tasking OSes and time-sharing OSes as two different things. I really don't see a difference between the two. MULTI-TASKING: You load a number of programs into memory and execute them. You execute another program if the time quantum allocated to the current program expires, OR if it goes on to do I/O and leaves the CPU, OR if it finishes execution. TIME-SHARING: The same, again. The same applies in the case of serial processing and batch processing. Although they are the same, I guess the only difference would be the way in which control information is passed to the CPU. Maybe, and again MAYBE, in serial processing you need to provide the punch cards with all the processes, while in batch, the entire batch uses the same set of control information - like all the print jobs having the same control information.


  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0
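    In the same spirit as [6], Solaris 11 can also cap bandwidth per traffic flow rather than per vNIC; a small sketch with flowadm (the flow name and numbers are illustrative):

        # cap TCP port 80 traffic arriving on vnic0 at 50 Mbps
        flowadm add-flow -l vnic0 -a transport=tcp,local_port=80 -p maxbw=50M httpflow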


  • Premier Support for Hyperion Enterprise Performance Management System 11.1.1.x Ends July 2013

    - by inowodwo
    Premier Support for Oracle Hyperion Enterprise Performance Management System release 11.1.1.x expires in July 2013. After July 2013, Sustaining Support will continue to be provided in accordance with Oracle's Lifetime Support Policy. Customers must follow a supported upgrade path.
    - If your deployment is at EPM System release 11.1.1.4, your supported upgrade path is directly to release 11.1.2.2.
    - If your deployment is at EPM System release 11.1.1.3, your supported upgrade path is directly to release 11.1.2.2.
    - If your deployment is at a release prior to 11.1.1.3, the recommended path is to upgrade to release 11.1.1.3 first; from there, you can continue with a direct upgrade to release 11.1.2.2.
    For more information see Doc ID 1511588.1 or the Oracle Lifetime Support Policy.


  • My self-generated CA is nearing its end-of-life; what are the best practices for CA rollover?

    - by Alphager
    Some buddies and I banded together to rent a small server to use for email, web hosting, and Jabber. Early on we decided to generate our own Certificate Authority (CA) and sign all our certificates with that CA. It worked great! However, the original CA cert is nearing its end-of-life (it expires in five months). Obviously, we will have to generate a new cert and install it on all our computers. Are there any best practices we should follow? We have to re-generate all certs and sign them with the new CA, right?
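    In case a concrete sketch helps, the rollover itself is a handful of OpenSSL commands (filenames are placeholders); the common practice is to create the new CA well before the old one expires, distribute both CA certs in parallel, and re-issue the service certs under the new CA as they come up for renewal:

        # new CA key and self-signed CA certificate (10 years)
        openssl genrsa -out newca.key 4096
        openssl req -new -x509 -days 3650 -key newca.key -out newca.crt

        # re-sign an existing service CSR under the new CA
        openssl x509 -req -in mail.example.csr -CA newca.crt -CAkey newca.key \
            -CAcreateserial -out mail.example.crt -days 365

    And yes - every certificate signed by the old CA needs a replacement signed by the new one; clients that trust both CAs during the overlap window see no interruption.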


  • JTF Translation Festival 2011

    - by user13133135
    (The original post opens in Japanese, which was garbled in extraction; the author's own English version follows, lightly cleaned up.)
    It's been a while since the last post... I have been working on machine translation (MT) and post-editing (PE) for Japanese. Last year was my first step in the MT+PE area, and I am taking this year as an advanced step: I plan to talk on "Post-editing 2011 (Advanced Step)" on November 29 at the JTF Translation Festival.
    [5 days until the application deadline] 21st JTF Translation Festival
    - Date: Nov 29, 2011, Tuesday, 9:30-20:30 (gates open 9:00)
    - Place: Arcadia Ichigaya, Tokyo
    - http://www.jtf.jp/jp/festival/festival_top.html
    In this session, I would like to expand on how to best utilize MT and PE, from both the client's and the translator's point of view. I will show some examples of post-editing as a guideline to the best and most effective way to post-edit Japanese, and I will also discuss best practice for MT users (clients). The session lasts 90 minutes... sounds a little long to me, but I want to spend more time on discussion than last year. It would be great to exchange thoughts and experiences about MT and PE. What are your concerns or problems in daily work with MT? If you have some, please bring them to my session at the JTF Translation Festival. My session details (Japanese): http://www.jtf.jp/jp/festival/festival_program.html#koen_04
    Here is the outline of my session: What is the advantage of MT? Does it solve all the problems of cost, resources, and quality? Well, it is not magic, so you cannot expect everything at once. When you have a problem, there are three options: 1. Be patient and wait until everything is ready; 2. Run a workaround using anything available now; 3. Find something completely new and spend time and money. This time I will focus on option 2 - doing something with what we already have - that is, how we can best utilize MT in our daily business, viewed two ways: from the client's point of view, and from the translator's point of view. Looking forward to meeting many people and exchanging thoughts and information!


  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released. This is the first of two beta releases intended to introduce users to the new features in the release. This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work. As is the case with all non-GA releases, it should not be used in any production environment. It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.6 version of MySQL Connector/Net brings the following new features:
    * Stored routine debugging
    * Entity Framework 4.3 Code First support
    * Pluggable authentication (now third parties can plug new authentication mechanisms into the driver)
    * Full Visual Studio 2012 support: everything from Server Explorer to Intellisense and the stored routine debugger
    Stored Procedure Debugging
    -------------------------------------------
    We are very excited to introduce stored procedure debugging into our Visual Studio integration. It works in a very intuitive manner, by simply clicking 'Debug Routine' from Server Explorer. You can debug stored routines, functions, and triggers. Some of the new features in this release include:
    * Besides normal breakpoints, you can define conditional and pass-count breakpoints.
    * The debugger editor now shows colorizing.
    * You can now change the values of locals in a function scope (previously this caused a deadlock due to functions executing within their own transaction).
    * You can now also debug triggers for 'replace' SQL statements.
    * In general, anything related to locals, watches, breakpoints, stepping, and the call stack should work in a similar way to C#'s Visual Studio debugger.
    Some limitations remain, due to the current debugger architecture:
    * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).
    * Only one debug session may be active on a given server.
    The debugger is feature complete at this point. We look forward to your feedback.
    Documentation
    -------------------------------------
    The documentation is still being developed and will be readily available soon (before Beta 2). You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!


  • I have an NGINX server configured to work with node.js, but many times a 1.03 MB js file is not loaded by various browsers on various PCs

    - by Totty
    I'm using this in a local LAN so it should be quite fast. The nginx server use the node.js server to serve static files, so it must pass throught node.js to download the files, but that is not a problem when I'm not using the nginx. In chrome with debugger on I can see that the status is: 206 - partial content and it only has downloaded 31KB of 1.03MB. After 1.1 min it turns red and the status failed. Waiting time: 6ms Receiving: 1.1 min The headers in google chrom: Request URL:http://192.168.1.16/production/assembly/script/production.js Request Method:GET Status Code:206 Partial Content Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:connect.sid=s%3Abls2qobcCaJ%2FyBNZwedtDR9N.0vD4Fi03H1bEdCszGsxIjjK0lZIjJhLnToWKFVxZOiE Host:192.168.1.16 If-Range:"1081715-1350053827000" Range:bytes=16090-16090 Referer:http://192.168.1.16/production/assembly/ User-Agent:Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Response Headersview source Accept-Ranges:bytes Cache-Control:public, max-age=0 Connection:keep-alive Content-Length:1 Content-Range:bytes 16090-16090/1081715 Content-Type:application/javascript Date:Mon, 15 Oct 2012 09:18:50 GMT ETag:"1081715-1350053827000" Last-Modified:Fri, 12 Oct 2012 14:57:07 GMT Server:nginx/1.1.19 X-Powered-By:Express My nginx configurations: File 1: user totty; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_access.txt; error_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## autoindex on; include /home/totty/web/production01_server/_deployment/nginxConfigs/server/*; } File that is included by the previous file: server { # custom location for entry # using only "/" instead of "/production/assembly" it # would allow you to go to "thatip/". In this way # we are limiting to "thatip/production/assembly/" location /production/assembly/ { # ip and port used in node.js proxy_pass http://127.0.0.1:3000/; } location /production/assembly.mongo/ { proxy_pass http://127.0.0.1:9000/; proxy_redirect off; } location /production/assembly.logs/ { autoindex on; alias /home/totty/web/production01_server/node_modules/production/_logs/; } }
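    One thing worth ruling out here is nginx's proxy buffering: by default it spools large upstream responses to a temp file on disk, and a permissions or disk problem there can stall a transfer in exactly this way. A hedged sketch of the proxied location with buffering off (same upstream as above; proxy_http_version needs nginx >= 1.1.4, and the question's headers show 1.1.19):

        location /production/assembly/ {
            proxy_pass http://127.0.0.1:3000/;
            proxy_buffering off;            # stream the upstream response directly
            proxy_max_temp_file_size 0;     # never spool the body to a temp file
            proxy_http_version 1.1;         # cleaner keepalive behaviour with node.js
        }

    The 206 with "Content-Length: 1" is Chrome retrying a one-byte Range (bytes=16090-16090) after the first transfer died, so it is a symptom rather than the cause.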


  • SEO: best way to deal with short lifetime URLs?

    - by Mike Norgate
    I am currently in the process of redesigning a job advert site and am trying to put a lot more effort into my SEO. My question is how I should deal with the URLs that point to job adverts once the advert expires. The options I have thought of so far are: Return a 404 error and show a 404 page - will it have an effect on ranking if a lot of URLs return 404s after only being up for a few weeks? Redirect to the job listing page - when the user requests a URL for an advert that has expired, just redirect to the main job listing page. Show the advert but tell the user it has closed - show the advert page, but with a notification that the advert has closed. The issue I see with this is that the user will visit the page, see it's closed, and then leave the site again, which would not be good for rankings.
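    A fourth option the list leaves out is the status code HTTP defines for exactly this case: 410 Gone, which tells crawlers the advert was removed deliberately rather than lost. A minimal sketch, assuming a PHP front controller serves the adverts (adapt to your stack):

        <?php
        // Hypothetical lookup - null once the posting has expired.
        if ($advert === null) {
            header('HTTP/1.1 410 Gone');
            include __DIR__ . '/gone.html'; // placeholder page linking to live jobs
            exit;
        }

    Listing a few related live adverts on that page also softens the bounce problem noted with the third option.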


  • Weird Apache Access Logs

    - by user38480
    I see repeated requests like these in my Apache access logs, and they have been eating up all my CPU. I have a normal WordPress installation. All I changed in the Apache configuration was the DocumentRoot, from /var/www/html to /var/www, for both ssl and the default configuration. Also, the file referenced in the requests (updatedll.jpg) does not exist on my server and isn't referenced in the source code served by any page of the web application. Could this be a security threat? What are these, actually, and what can I do to stop them? I changed the IP address of my server and they still kept coming, meaning that somebody is actually hitting the domain name and not the IP address. Why does my server send a 301 for these requests? Shouldn't it be sending a 404? Is it because WordPress is installed in my root directory and the .htaccess file present for WordPress is sending a 301 redirect? My disk access logs also seem to have high peaks intermittently, but nobody is actually accessing the site - I see no access logs except these below. Also, I see that all the requests seem to be coming from one of the following five IP addresses.
    201.4.132.43 - - [05/Jun/2014:07:35:08 -0400] "GET /updatedll.jpg HTTP/1.1" 301 465 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; BTRS103681; GTB7.5; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; InfoPath.2; OfficeLiveConnector.1.3; OfficeLivePatch.0.0; AskTbATU3/5.15.29.67612; BRI/2)"
    187.40.241.48 - - [05/Jun/2014:07:35:08 -0400] "GET /updatedll.jpg HTTP/1.1" 301 465 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; GTB7.5; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)"
    186.56.134.132 - - [05/Jun/2014:07:35:10 -0400] "GET /updatedll.jpg HTTP/1.0" 301 428 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)"
    71.223.252.14 - - [05/Jun/2014:07:35:13 -0400] "GET /updatedll.jpg HTTP/1.1" 301 465 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; BTRS31756; GTB7.5; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E; InfoPath.2)"
    85.245.229.167 - - [05/Jun/2014:07:35:14 -0400] "GET /updatedll.jpg HTTP/1.1" 301 465 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MAAU; .NET4.0C; BRI/2; .NET4.0E; MAAU)"
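    On the 301s: with WordPress in the root, any URL that doesn't match a real file is handed to index.php by the WordPress .htaccess rules, and WordPress's canonical-URL redirect then answers with a 301 before any 404 can happen - which also explains the CPU cost, since every probe spins up PHP. Because the probes always ask for the same path, one cheap option is to cut them off in .htaccess above the WordPress block; a sketch, assuming mod_rewrite:

        # answer the botnet probes with a flat 403, never touching PHP/WordPress
        RewriteEngine On
        RewriteRule ^updatedll\.jpg$ - [F,L]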


  • Help on PHP CURL script [closed]

    - by Sumeet Jain
    This script uses a cookie.txt in the same folder, chmoded to 777. The problem I am facing is that I have many accounts to log in with. Say I have 5 accounts: I created cookie1.txt, cookie2.txt and so on, and then the script worked with the post data. But I want this to stay logged in and keep posting data. Can anyone tell me how to do this? Code which works for login and post data: http://pastebin.com/zn3gfdF2 Code which I require should be something like this (I tried using the same cookie.txt, but I guess it expires :( ): http://pastebin.com/45bRENLN Please help me with dealing with cookies... or suggest how to modify the code without using cookie files...
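    The usual pattern for many accounts is one cookie jar per account, generated from the account list rather than hand-made cookie1.txt, cookie2.txt files; a sketch under that assumption (the URL and form fields are placeholders):

        <?php
        foreach ($accounts as $i => $acc) {         // $accounts: your credential list
            $jar = __DIR__ . "/cookies/{$i}.txt";   // one writable jar per account
            $ch  = curl_init('http://example.com/login.php');
            curl_setopt($ch, CURLOPT_COOKIEJAR,  $jar);  // cookies saved on close
            curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);  // saved cookies sent back
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
                'user' => $acc['user'],
                'pass' => $acc['pass'],
            ]));
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            $html = curl_exec($ch);
            curl_close($ch);                        // the jar file is written here
        }

    When a session expires, repeating the login POST with the same jar refreshes it - the server's new session cookie simply overwrites the stale one.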


  • Apache FilesMatch regexp: can it match the 10-digit cache buster (Rails-generated) following the filename?

    - by ynkr
    According to the apache FilesMatch docs: The FilesMatch directive provides for access control by filename Basically, I only want to set an expires header for resources that have a 10 digit "cache buster" id appended to the name. So, here is my attempt at such a thing in my httpd.conf <FilesMatch "(jpg|jpeg|png|gif|js|css)\?\d{10}$"> ExpiresActive On ExpiresDefault "now plus 5 minutes" </FilesMatch> And here is an example of a resource I want to match: http://localhost:3000/images/of/elvis/eating-a-bacon-sandwich.png?1306277384 Now obviously my FilesMatch regexp is not matching so I am guessing 1 of 2 things is happening. Either my regexp is wonky or the '?1231231231' cache busting part of the file is not part of what apache considers part of the filename. Can anybody confirm and/or give me a way to cache only those resources that will not persist beyond the next deploy?
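    The regexp isn't the problem: FilesMatch (like Files) tests only the filename and never sees the query string, so no pattern there can match the ?1306277384 part. One workaround is mod_rewrite setting a marker that mod_headers then keys on - a hedged sketch (in .htaccess context the variable may arrive prefixed as REDIRECT_CACHE_BUSTED):

        RewriteEngine On
        # flag static assets whose query string is a 10-digit cache buster
        RewriteCond %{QUERY_STRING} ^\d{10}$
        RewriteRule \.(jpe?g|png|gif|js|css)$ - [E=CACHE_BUSTED:1]
        # short lifetime only for the flagged requests (mod_expires cannot
        # be conditioned on env vars, hence mod_headers)
        Header set Cache-Control "max-age=300" env=CACHE_BUSTED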


  • Site returning 404 header to google, not sure why

    - by Damon
    A Drupal site that works fine for regular users returns a 404 Not Found error when I try to use the W3C validator on it; it is also not being indexed by Google at all (which is the main issue, but I suspect there is a connection). It is an https:// site with an .htaccess rule to redirect any http:// request to https://. I had it running in Google Webmaster Tools and thought it was fine, but it turns out I had not added the https domain. After adding the https domain, it's also returning the header as HTTP/1.1 404 Not Found Date: Mon, 15 Oct 2012 19:37:43 GMT Server: Apache Expires: Sun, 19 Nov 1978 05:00:00 GMT Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0 Robots.txt just has User-agent: * Crawl-delay: 10 # Files Disallow: /cron.php How can I check what the issue is here?
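    One quick way to see what the validator and Googlebot actually receive is to replay their requests with curl and compare them against a browser User-Agent; if the 404 appears only for non-browser agents, something (a security or bot-blocking module, or the Drupal theme) is discriminating on User-Agent:

        # as the W3C validator
        curl -k -I -A "W3C_Validator/1.3" https://example.com/
        # as Googlebot
        curl -k -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" https://example.com/
        # as an ordinary browser, for comparison
        curl -k -I -A "Mozilla/5.0" https://example.com/

    (-I sends a HEAD request, -A sets the User-Agent, -k skips certificate checks; an invalid https certificate is itself worth ruling out, since crawlers won't click through warnings the way users do.)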


  • I was going to get an iPhone, BUT ..

    My cell phone contract expires in a couple of weeks and I was all set to buy an iPhone. The iPhone had started to take off when I got my current phone / contract, but at the time Microsoft paid for a significant portion of my phone bill, so staying with a Windows powered phone was appropriate (I like to be a good team player). Budget tightening has changed the expense policy, and now Microsoft's contribution to my cell phone expenses is limited to $35. So, I figured I'd get an iPhone, it's...


  • Missing Edit Option on Silverlight 4 DataForm

    - by rip
    I’m trying out the Silverlight 4 beta DataForm control. I don’t seem to be able to get the edit and paging options at the top of the control like I’ve seen in Silverlight 3 examples. Has something changed or am I doing something wrong? Here’s my code: <UserControl x:Class="SilverlightApplication7.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400" xmlns:dataFormToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.DataForm.Toolkit"> <Grid x:Name="LayoutRoot" Background="White"> <dataFormToolkit:DataForm HorizontalAlignment="Left" Margin="10" Name="myDataForm" VerticalAlignment="Top" /> </Grid> </UserControl> public partial class MainPage : UserControl { public MainPage() { InitializeComponent(); this.Loaded += new RoutedEventHandler(MainPage_Loaded); } void MainPage_Loaded(object sender, RoutedEventArgs e) { Movie movie = new Movie(); myDataForm.CurrentItem = movie; } public enum Genres { Comedy, Fantasy, Drama, Thriller } public class Movie { public int MovieID { get; set; } public string Name { get; set; } public int Year { get; set; } public DateTime AddedOn { get; set; } public string Producer { get; set; } public Genres Genre { get; set; } } }
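    For what it's worth, the DataForm only shows the paging/navigation strip when it is bound to a collection: a single object assigned to CurrentItem gives one record with nothing to page through. A sketch of the loaded handler bound to a list instead (sample data made up; needs using System.Collections.Generic;):

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            // A collection enables the navigation/add/delete UI; one item does not.
            var movies = new List<Movie>
            {
                new Movie { MovieID = 1, Name = "First",  Year = 2009 },
                new Movie { MovieID = 2, Name = "Second", Year = 2010 },
            };
            myDataForm.ItemsSource = movies;
        }

    If the edit icons still don't show, the Toolkit DataForm also has a CommandButtonsVisibility property ("All" turns everything on) worth setting explicitly on the element.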


  • Avoid RichFaces sending back JavaScript libraries in the Ajax responses

    - by pakore
    I'm using JSF 1.2 with Richfaces, and for every ajax request, the server is sending back the response, whichi is good, but it also contains all the links to the javascript files. I want to improve the performance so I just want the <body> to be returned, because all the javascript files are already loaded in the browser when the user logs in (my app is not restful). How can i do that? Thanks This is an example of a response to reRender an image when clicking a button. <?xml version="1.0"?> <html lang="nl_NL" xmlns="http://www.w3.org/1999/xhtml"><head><title></title><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalorg/richfaces/renderkit/html/css/basic_both.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalorg/richfaces/renderkit/html/css/extended_both.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" media="rich-extended-skinning" rel="stylesheet" type="text/css" /><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalcss/page.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalorg.ajax4jsf.javascript.PrototypeScript.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg.ajax4jsf.javascript.AjaxScript.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg.ajax4jsf.javascript.ImageCacheScript.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/browser_info.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/ajax4jsf/javascript/scripts/form.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/tabPanel.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalcss/tabPanel.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/jquery/jquery.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/jquery.utils.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/json/json-mini.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg.ajax4jsf.javascript.DnDScript.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/utils.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/json/json-dom.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/dnd/dnd-common.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/dnd/dnd-draggable.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/dnd/dnd-dropzone.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/form.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/script/controlUtils.js.xhtml" type="text/javascript"> </script><script 
src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/common-scrollable-data-table.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/extended-data-table.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/drag-indicator.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/ext-dt-drag-indicator.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/ext-dt-simple-draggable.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/ext-dt-simple-dropzone.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalorg/richfaces/renderkit/html/css/dragIndicator.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalcss/extendedDataTable.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/menu.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/context-menu.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/available.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/menu.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalcss/menucomponents.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/tooltip.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalorg/richfaces/renderkit/html/css/tooltip.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/datascroller.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalcss/datascroller.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/modalPanel.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/modalPanelBorders.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalorg/richfaces/renderkit/html/css/modalPanel.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/tiny_mce/tiny_mce_src.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/editor.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalcss/editor.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/events.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/scriptaculous/effects.js.xhtml" type="text/javascript"> </script><script 
src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/JQuerySpinBtn.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/calendar.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalorg/richfaces/renderkit/html/css/calendar.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/panelbar.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalcss/panelbar.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/comboboxUtils.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/utils.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/inplaceinputstyles.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finalscripts/inplaceinput.js.xhtml" type="text/javascript"> </script><link class="component" href="/eyeprevent/a4j/s/3_3_3.Finalcss/inplaceinput.xcss/DATB/eAF7sqpgb-jyGdIAFrMEaw__.xhtml" rel="stylesheet" type="text/css" /><script src="/eyeprevent/a4j/g/3_3_3.Finalorg/richfaces/renderkit/html/scripts/skinning.js.xhtml" type="text/javascript"> </script><script src="/eyeprevent/a4j/g/3_3_3.Finaljquery.js.xhtml" type="text/javascript"> </script></head> <body> <img id="j_id305:supportImage" src="/eyeprevent/image/os-ir-central.jpg" width="50%" /> <meta name="Ajax-Update-Ids" content="j_id305:supportImage" /> <span id="ajax-view-state"><input type="hidden" name="javax.faces.ViewState" id="javax.faces.ViewState" value="j_id24" autocomplete="off" /> </span><meta id="Ajax-Response" name="Ajax-Response" content="true" /> <meta name="Ajax-Update-Ids" content="j_id305:supportImage" /> <span id="ajax-view-state"><input type="hidden" name="javax.faces.ViewState" id="javax.faces.ViewState" value="j_id24" autocomplete="off" /> </span><meta id="Ajax-Response" name="Ajax-Response" content="true" /> </body> </html> And this is the code that generated it: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:a4j="http://richfaces.org/a4j" xmlns:rich="http://richfaces.org/rich"> <ui:composition> <h:form> <h:panelGrid columns="1"> <a4j:region> <h:graphicImage id="supportImage" value="#{user.support.imagePath}" rendered="#{user.support.imageLoaded}" width="50%" /> </a4j:region> <h:panelGroup> <a4j:commandButton action="#{user.support.acceptImage}" value="YES" reRender="supportImage"/> <a4j:commandButton action="#{user.support.rejectImage}" value="NO" reRender="supportImage"/> </h:panelGroup> </h:panelGrid> </h:form> </ui:composition> </html>


  • How to upload a file from iPhone SDK to an ASP.NET vb.net web form using ASIFormDataRequest

    - by user289348
    Download http://allseeing-i.com/ASIHTTPRequest/. This works like a charm and is a good wrapper and handles things nicely. The make the request this way to the asp.net code listed below. Create a asp.net webpage that has a file control. IPHONE CODE: NSURL *url = [NSURL URLWithString:@"http://YourWebSite/Upload.aspx"]; ASIFormDataRequest *request = [ASIFormDataRequest requestWithURL:url]; //These two must be added. ASP.NET Looks for them, if //they are not there in the request, the file will not upload [request setPostValue:@"" forKey:@"__VIEWSTATE"]; [request setPostValue:@"" forKey:@"__EVENTVALIDATION"]; [request setFile:@"PATH_TO_Local_File_on_Iphone/file/jpg" forKey:@"fu"]; [request startSynchronous]; This is the website code <%@ Page Language="VB" AutoEventWireup="false" CodeFile="Upload.aspx.vb" Inherits="Upload" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:FileUpload ID="fu" runat="server" EnableViewState="False" /> </div> <asp:Button ID="Submit" runat="server" Text="Submit" /> </form> </body> </html> //Code behind page Partial Class Upload Inherits System.Web.UI.Page Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim tMarker As New EventMarkers If fu.HasFile = True Then 'fu.PostedFile fu.SaveAs("E:\InetPub\UploadedImage\" & fu.FileName) End If End Sub End Class


  • WPF binding to current class property

    - by AnD
    Hello, I have a problem that i cant solve :( I have a user control (xaml file and cs file) in xaml it's like: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" x:Class="Demo.CtrlContent" x:Name="UserControl" d:DesignWidth="598.333" d:DesignHeight="179.133" xmlns:Demo="clr-namespace:Demo" > <UserControl.Resources> <Storyboard x:Key="SBSmall"> <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="border" Storyboard.TargetProperty="(FrameworkElement.Width)"> <SplineDoubleKeyFrame KeyTime="00:00:01" Value="I WANT TO BIND VALUE HERE"/> </DoubleAnimationUsingKeyFrames> </Storyboard> </UserControl.Resources> <Border BorderBrush="#FFC2C0C1" CornerRadius="3,3,3,3" BorderThickness="1,1,1,1" RenderTransformOrigin="0.5,0.5" x:Name="border" Margin="1,3,1,3" HorizontalAlignment="Left" VerticalAlignment="Top" Width="300"> and .cs file: public partial class CtrlContent { private mindef W { get { return (mindef) Window.GetWindow(this); } } public double MedWidth { // I WANT BIND THIS VALUE GO TO STORYBOARD VALUE IN XAML ABOVE get { double actualW; if(W == null) actualW = SystemParameters.PrimaryScreenWidth; else actualW = W.WrapMain.ActualWidth; return actualW - border.Margin.Left - border.Margin.Right; } } public double SmlWidth { get { return MedWidth / 2; } } public CtrlContent () { this.InitializeComponent(); } public CtrlContent (Content content) { this.InitializeComponent(); Document = content; } } in my .cs file there's a property called MedWidth, and in XAML file there's a storyboard called: SBSmall I want to bind my storyboard value to my property in class ctrlcontent. *the idea is, the storyboard is an animation to resize the control to a certain width depends on its parent container (the width is dynamic) anybody? please :) thanks!
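    Animation values inside a Storyboard resource can't be data-bound in WPF - the storyboard is frozen when it runs, and frozen Freezables reject bindings - which is why there is no clean XAML-only route. The usual workaround is to set the keyframe from code just before starting the animation; a sketch against the control above:

        public void PlaySmall()
        {
            // Clone so the keyframe is writable even if the resource froze.
            Storyboard sb = (Storyboard)((Storyboard)Resources["SBSmall"]).Clone();
            var anim  = (DoubleAnimationUsingKeyFrames)sb.Children[0];
            var frame = (SplineDoubleKeyFrame)anim.KeyFrames[0];
            frame.Value = SmlWidth;   // the value XAML couldn't bind to
            sb.Begin(this);
        }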


  • Problems with jQuery getJSON using local files in Chrome

    - by Tauren
    I have a very simple test page that uses XHR requests with jQuery's $.getJSON and $.ajax methods. The same page works in some situations and not in others. Specificially, it doesn't work in Chrome on Ubuntu. I'm testing on Ubuntu 9.10 with Chrome 5.0.342.7 beta and Mac OSX 10.6.2 with Chrome 5.0.307.9 beta. It works correctly when files are installed on a web server from both Ubuntu/Chrome and Mac/Chrome (try it out here). It works correctly when files are installed on local hard drive in Mac/Chrome (accessed with file:///...). It FAILS when files are installed on local hard drive in Ubuntu/Chrome (access with file:///...). The small set of 3 files can be downloaded in a tar/gzip file from here: http://issues.tauren.com/testjson/testjson.tgz When it works, the Chrome console will say: XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:16Using getJSON index.html:21 Object result: "success" __proto__: Object index.html:22success XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:29Using ajax with json dataType index.html:34 Object result: "success" __proto__: Object index.html:35success XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:46Using ajax with text dataType index.html:51{"result":"success"} index.html:52undefined When it doesn't work, the Chrome console will show this: index.html:16Using getJSON index.html:21null index.html:22Uncaught TypeError: Cannot read property 'result' of null index.html:29Using ajax with json dataType index.html:34null index.html:35Uncaught TypeError: Cannot read property 'result' of null index.html:46Using ajax with text dataType index.html:51 index.html:52undefined Notice that it doesn't even show the XHR requests, although the success handler is run. I swear this was working previously in Ubuntu/Chrome, and am worried something got messed up. I already uninstalled and reinstalled Chrome, but that didn't help. Can someone try it out locally on your Ubuntu system and tell me if you have any troubles? Note that it seems to be working fine in Firefox.
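    This matches Chrome's same-origin policy for file:// URLs: each local file is treated as its own origin, so the XHR is cross-origin and jQuery hands the callbacks a null response (Firefox historically allows same-directory file access, which is why it works there). For local development Chrome can be launched with a flag that relaxes this - close all Chrome windows first or the flag is ignored:

        google-chrome --allow-file-access-from-files

    The cleaner fix is what already works for you: serve the files over http://, even from a throwaway local web server.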


  • apache htaccess rewrite rules make redirection loop

    - by Ali
    Hi guys, Have a very strange problem with Apache .htaccess URL Rewriting and Redirection. Here's my setup: I have a zend application with a single point of entry (index.php) directly under my apache document root (call this the "public" folder). I also have all other public files (images, js, css, etc.) under the public folder. Here, I also have a wordpress blog under the "blog" folder. There's an empty test folder too The Problem When I go to mydomain.com/blog, I get redirected to http://www.theredpin.com/blog (correctly), then to http://www.theredpin.com/blog/ (just with an extra / at end), finally to http://theredpin.com/blog/ -- and we're back where we started. The loop continues. What I don't understand is why would wordpress try to remove the www? I'm guessing it's wordpress because my empty test folder acts just fine! PLEASE HELP. I"M REALLY DESPERATE :( Thank you very much Other things that happen: When I go to mydomain.com, i correctly get redirected to www.mydomain.com When I go to www.mydomain.com, i correctly stay where I am When I go to www.mydomain.com/test or mydomain.com/test, behaviour is correct. Setup So my .htaccess file does the following: If there's no www., then add it and do a 301 redirect. Here's the code I use RewriteCond %{HTTP_HOST} ^mydomain.com [NC] RewriteRule ^(.*)$ http://www.mydomain.com/$1 [L,R=301] If the request is NOT for a resource (image, etc.), or the blog, then load zend application by rewriting to index.php RewriteRule !((^blog(/)?.*$)|(.(js|ico|gif|jpg|jpeg|png|css|cur|JPG|html|txt))$) index.php Thanks again for all your help!!! Ali
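    On "why would WordPress try to remove the www": WordPress 301-redirects every request to the URL stored in its siteurl/home settings (the redirect_canonical behaviour), so if those options were saved without the www it will keep undoing the .htaccess redirect, and you loop. Check those two values in the blog's settings; they can also be pinned in wp-config.php:

        // force WordPress to agree with the .htaccess www redirect
        define('WP_HOME',    'http://www.mydomain.com/blog');
        define('WP_SITEURL', 'http://www.mydomain.com/blog');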

