Search Results

Search found 20099 results on 804 pages for 'virtual host'.

Page 299/804

  • Cannot boot Ubuntu 13.10 with my USB; can I change the kernel on my laptop to run it?

    - by Carlos Dunick
    Currently I am running 12.04 and looking to upgrade to 13.10. I first tried a bootable 64-bit USB and failed, with the message "Kernel requires an x86-64 CPU but only detected an i686 CPU. Unable to boot; please use a kernel appropriate for your CPU". I then tried 32-bit and the same message came up. Is this due to my laptop simply being too slow, or can/should I change the kernel somehow?

    Hardware: Acer Aspire 5710z, Intel Pentium dual-core processor, 1.73 GHz, 533 MHz FSB, 1 MB L2 cache, 2 GB DDR2.

    lspci output:
      00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
      00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
      00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
      00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02)
      00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02)
      00:1c.2 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 3 (rev 02)
      00:1c.3 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 02)
      00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 02)
      00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 02)
      00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 02)
      00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 02)
      00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 02)
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
      00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
      00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02)
      00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [AHCI mode] (rev 02)
      00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 02)
      04:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit Ethernet PCI Express (rev 02)
      05:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)
      06:00.0 FLASH memory: ENE Technology Inc ENE PCI Memory Stick Card Reader Controller
      06:00.1 SD Host controller: ENE Technology Inc ENE PCI SmartMedia / xD Card Reader Controller
      06:00.2 FLASH memory: ENE Technology Inc Memory Stick Card Reader Controller
      06:00.3 FLASH memory: ENE Technology Inc ENE PCI Secure Digital / MMC Card Reader Controller
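    The error means the image's kernel was built for a 64-bit (x86-64) CPU. Whether a CPU can run a 64-bit kernel is indicated by the "lm" (long mode) flag in /proc/cpuinfo, which is what grep -w lm /proc/cpuinfo checks; if the flag is absent, only 32-bit (i386/i686) images will boot. Below is a minimal C++ sketch of the same check (the file name and messages are illustrative only, not part of the question):

      // check_64bit.cpp - report whether /proc/cpuinfo lists the "lm" (long mode)
      // flag, i.e. whether the CPU can run a 64-bit kernel.
      #include <fstream>
      #include <iostream>
      #include <sstream>
      #include <string>

      int main() {
          std::ifstream cpuinfo("/proc/cpuinfo");
          if (!cpuinfo) {
              std::cerr << "could not open /proc/cpuinfo\n";
              return 2;
          }
          std::string line;
          while (std::getline(cpuinfo, line)) {
              if (line.compare(0, 5, "flags") != 0)
                  continue;                     // only inspect the "flags" lines
              std::istringstream words(line);
              std::string word;
              while (words >> word) {
                  if (word == "lm") {           // long mode => x86-64 capable
                      std::cout << "CPU supports 64-bit (x86-64) kernels\n";
                      return 0;
                  }
              }
          }
          std::cout << "No 'lm' flag found: CPU is 32-bit only, use a 32-bit (i386) image\n";
          return 1;
      }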

    Read the article

  • SharePoint 2007 Enabling Incoming Email Error

    - by Cherie Riesberg
    Symptom: When configuring incoming e-mail, the e-mails come through just fine if the server name is in the e-mail address: [email protected]. But when you change it to a vanity name, [email protected], the message is bounced back and you get this error:

      Delivery has failed to these recipients or distribution lists: [email protected]
      Your message wasn't delivered because of security policies. Microsoft Exchange will not try to redeliver this message for you. Please provide the following diagnostic text to your system administrator.
      The following organization rejected your message: servername01.fqdn.com.

    Problem: The SharePoint server relay rejects the message because it doesn't recognize the name. You have set it up in Exchange, but you need to set up an alias in the SMTP service on the SharePoint server.

    Solution: Configure an alias domain. An alias domain is an alias of the default domain. You can set up alias domains that use the same settings as the default domain. Messages that are received by the SMTP Service for an alias domain are placed in the Drop folder that is designated for the default domain. To configure an alias domain, follow these steps:
    1. Start IIS Manager or open the IIS snap-in.
    2. Expand Server_name, where Server_name is the name of the server, and then expand the SMTP virtual server that you want (for example, Default SMTP Virtual Server).
    3. Right-click Domains, point to New, and then click Domain. The New SMTP Domain Wizard starts.
    4. Click Alias, and then click Next.
    5. Type a name for the alias domain in the Name box, and then click Finish.
    6. Quit IIS Manager or close the IIS snap-in.

    Read the article

  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software and see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11. The version of the Oracle Solaris OS that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain: if you run Oracle Solaris 10 in the primary domain, you can still run Oracle Solaris 11 in a guest domain, and if you run Oracle Solaris 11 in the primary domain, you can still run Oracle Solaris 10 in a guest domain. In addition, starting with the Oracle VM Server for SPARC 2.2 release you can migrate a guest domain even if the source and target machines have different processor types; for example, from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. To enable cross-CPU migration, the guest domain on the source and target systems must run Solaris 11, and you also need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and cross-CPU migration, refer to the following white paper.

    Read the article

  • Cloning from a given point in the snapshot tree

    - by Fat Bloke
    Although we have just released VirtualBox 4.3, this quick blog entry is about a longer-standing ability of VirtualBox when it comes to snapshots and cloning, and was prompted by a question posed internally here in Oracle: "Is there a way I can create a new VM from a point in my snapshot tree?"

    Here's the scenario: let's say you have your favourite work VM, which is Oracle Linux based, and as you installed different packages, such as the database, middleware, and the apps, you took snapshots at each point. But you then need to create a new VM for some other testing, or to share with a colleague who will be using the same Linux and Database layers but may want to reconfigure the Middleware tier and install his own Apps.

    All you have to do is right-click on the snapshot that you're happy with and clone. Give the VM that you are about to create a name, and if you plan to use it on the same host machine as the original VM, it's a good idea to "Reinitialize the MAC address" so there's no clash on the same network. Now choose the Clone type: if you plan to use this new VM on the same host as the original, you can use Linked Cloning, else choose Full.

    At this point you have a choice about what to do with your snapshot tree. In our example, we're happy with the Linux and Database layers, but we may want to allow our colleague to change the upper tiers, with the option of reverting back to our known-good state, so we'll retain the snapshot data in the new VM from this point on. The cloning process then chugs along and may take a while if you chose a Full Clone. Finally, the newly cloned VM is ready with the subset of the snapshot tree that we wanted to retain.

    Pretty powerful, and very useful. Cheers, -FB

    Read the article

  • 13.04 Temp Save, rt3290, Kernel downgrade

    - by user170534
    This is kind of a multi-part question, but I didn't want to open more than one topic. I have a fresh install of Ubuntu with tlp configured and acpi_call used to keep the 7670M turned off. I was an Arch user for a short time, and with Openbox and Firefox it ran at about 60 to 70 degrees; I wanted to move to a stable release for exactly this reason. Sensors output:

      acpitz-virtual-0
      Adapter: Virtual device
      temp1: +50.0°C

      radeon-pci-0100
      Adapter: PCI adapter
      temp1: -128.0°C

      coretemp-isa-0000
      Adapter: ISA adapter
      Physical id 0: +56.0°C (high = +87.0°C, crit = +105.0°C)
      Core 0: +54.0°C (high = +87.0°C, crit = +105.0°C)
      Core 1: +55.0°C (high = +87.0°C, crit = +105.0°C)

    The temperature is not seriously high, yet it could be lower. Another issue is my wifi card, an rt3290: the rt2800pci module is fine and all that, but the download performance is pretty bad, the signal range is annoying, and the proprietary driver gives a configuration error about some specific rt2860 code and integrales in the pci_dev file. If I blank out the variables causing the error, the module loads but the driver is unused. Since the driver is old, I was thinking of downgrading the kernel of Raring to 3.7 or maybe 3.6. Should I, and/or can I?

    Read the article

  • Issues implementing arcball viewer

    - by Pris
    My scene has a simple cube, and a camera built with the lookAt function (I'm using OpenGL). The scene renders fine, and I'm sure I have my model/view/projection matrices set up correctly. Now I'm trying to implement arcball rotation for my camera, but I'm having some trouble. I've got it down to calculating the angle/axis rotation for a virtual sphere in normalized screen coordinates. That means when I move my mouse left to right I get an angle around the Y axis, and moving my mouse up/down gets me an angle about X. I'm not sure where to go from here: what do I need to do with my axis so I can apply the angle to simulate camera rotation about its viewpoint? If I try directly applying the axis/angle rotation to the camera/view transform, I get what you'd expect: the view is rotated about the world axes that the mouse movement over the virtual sphere on the screen corresponds to. So if I move the mouse up/down, the view rotates about the world's X axis (what I get reminds me of a first-person view)... but this isn't what I want. I think I need the axis I get to be transformed so it passes through the camera viewpoint and is oriented correctly with reference to the camera, but I don't know if that's right or how to do that.
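    One common way to handle this (a sketch, not code from the question) is to treat the axis coming off the virtual sphere as being expressed in view/camera space, convert it into world space using the camera's current orientation, and then orbit the eye position around the look-at point about that world-space axis. The sketch below assumes GLM is available; the OrbitCamera fields are invented for illustration, and the sign of the angle may need flipping depending on your conventions.

      #include <glm/glm.hpp>
      #include <glm/gtc/quaternion.hpp>

      // Hypothetical camera state: eye position, look-at target, up vector.
      struct OrbitCamera {
          glm::vec3 eye{0.0f, 0.0f, 5.0f};
          glm::vec3 target{0.0f};
          glm::vec3 up{0.0f, 1.0f, 0.0f};
      };

      // Apply an arcball rotation given in *view space* (axis + angle from the
      // virtual sphere) by re-expressing it in world space and orbiting the eye
      // around the target. Assumes up is never parallel to the view direction.
      void applyArcball(OrbitCamera& cam, glm::vec3 axisView, float angleRadians)
      {
          // Camera orientation basis (columns = right, up, back): view -> world.
          glm::vec3 back  = glm::normalize(cam.eye - cam.target);
          glm::vec3 right = glm::normalize(glm::cross(cam.up, back));
          glm::vec3 upv   = glm::cross(back, right);
          glm::mat3 camBasis(right, upv, back);

          // Transform the rotation axis from view space into world space.
          glm::vec3 axisWorld = glm::normalize(camBasis * axisView);

          // Rotate the eye (and up vector) about the target around that axis.
          glm::quat q = glm::angleAxis(angleRadians, axisWorld);
          cam.eye = cam.target + q * (cam.eye - cam.target);
          cam.up  = q * upv;
      }

    With this, dragging the mouse always rotates the view relative to what is currently on screen, instead of about the fixed world axes.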

    Read the article

  • Hosting woes

    Unfortunately quite a few people have noticed our recent hosting problems, but if you are reading this they should all be over, so please accept our apologies.

    Our former web host decided to migrate to a new platform. It had all sorts of great features, but on reflection hosting wasn't one of them. We knew it was coming, and had even been proactive and requested several dates on their migration control panel so I could be around to check it afterwards. The dates came and went without anything happening, so we sat back and carried on for a couple of months thinking they'd get back to us when they were ready. Then out of the blue I get an email saying it has happened! Now this is what I call timing: I had client work to complete, a 50 minute presentation to write, and there was a little conference called SQLBits that I help organise at the end of the week, and then our hosting provider decides to migrate our sites.

    Unfortunately they only migrated parts of the sites; they forgot things like the database for SQLDTS. The database eventually appeared, but the data didn't. Then the data pitched up, but without the stored procedures. I was even asked if I could perform a backup and send it to them, as they were getting timeout errors. Never mind the issues of performing a native backup on a hosted server; whilst I could have done something, the question actually left me speechless. So you cannot access your own SQL server and you expect me to be able to help?

    This site was there, but hadn't been set as an IIS application, so all path references were wrong, which meant no CSS and all the internal navigation and links were wrong. The new improved hosting platform Control Panel didn't appear to like setting applications. It said it would, you'd have to wait 2 hours of course, then it just decided not to bother after all.

    So needless to say, after a very successful SQLBits I focused my attention on finding a new web host, and here we are again. Sorry it took so long.

    Read the article

  • The Oracle EMEA Partner Event of the Year- FREE, LIVE & ONLINE!

    - by Claudia Costa
    New products. New specializations. New opportunities. Find out how you can use them to build your Oracle business even faster and more effectively in 2010/11. The date for your diary is the 29th of June 2010, at 11:00 GMT. And this summer's event is bigger and better than ever. You will learn:
    - What Oracle's acquisition of Sun Microsystems means for your business and your customers
    - How Oracle Specialization can help you grow faster and smarter, and how Oracle partners from across the region are already benefitting
    - Why Oracle's latest technology, applications, middleware and hardware products and solutions offer you unbeatable new business opportunities
    - How Oracle's partner program is evolving to help partners succeed, with a live link to the Oracle FY11 Global Partner Kickoff
    - How specialization has helped a former Microsoft executive become one of the world's most successful social entrepreneurs

    You'll also have the chance to network with Oracle experts and other partners, and download valuable collateral from specially constructed virtual information booths. Plus, at the end of the event, submit your feedback form for the chance to win two passes to Oracle OpenWorld in San Francisco this September! Don't miss out! REGISTER TODAY for this exciting, exclusive online event. Visit here for more information and to view the complete agenda.

    We look forward to welcoming you on the 29th of June! Yours sincerely, Stein Surlien, Senior Vice President, Alliances & Channels, Oracle EMEA

    PS. The Oracle PartnerNetwork Days Virtual Event will be followed by "Oracle PartnerNetwork Days Executive Forums" and "Oracle PartnerNetwork Days Satellite Events" in various countries. Please look out for further communications from your local Oracle team.

    Read the article

  • "Unmet Dependencies" problem when trying apt-get install

    - by GChorn
    Anytime I try to install Python packages using the command:

      sudo apt-get install python-package

    I get the following output:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
       linux-headers-generic : Depends: linux-headers-3.2.0-36-generic but it is not going to be installed
       linux-headers-generic-pae : Depends: linux-headers-3.2.0-36-generic-pae but it is not going to be installed
       linux-image-generic : Depends: linux-image-3.2.0-36-generic but it is not going to be installed
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    This seems to have started when these same three packages showed up in Ubuntu's Update Manager and kicked an error when I tried to install them there. Based on the suggestion in the output above, I tried running:

      sudo apt-get -f install

    But this only gave me several instances of the following error:

      dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-36-generic_3.2.0-36.57_i386.deb (--unpack):
       unable to create `/lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko.dpkg-new' (while processing `./lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko'): No space left on device

    Now maybe I'm way off-base here, but I'm wondering if the error could be coming from the "No space left on device" part? The thing is, I'm running Ubuntu as a VirtualBox VM, but I've got it set to dynamically increase its virtual hard drive space as needed, so why am I still getting this error? Here's my output when I use df -h:

      Filesystem        Size  Used  Avail Use% Mounted on
      /dev/sda1         6.9G  5.7G  869M  88%  /
      udev              494M  4.0K  494M  1%   /dev
      tmpfs             201M  784K  200M  1%   /run
      none              5.0M  0     5.0M  0%   /run/lock
      none              501M  76K   501M  1%   /run/shm
      VB_Shared_Folder  466G  271G  195G  59%  /media/sf_VB_Shared_Folder

    When I perform sudo apt-get -f install and the system says "After this operation, 192 MB of additional disk space will be used", does that mean 192 MB of my virtual machine's current disk, or 192 MB on top of the rest of my free space? As I said, my machine normally dynamically allocates additional disk space from the host machine, so I don't see why there would be space restrictions at all...

    Read the article

  • Problem installing qtbase

    - by teucer
    I am getting the following error when installing the "qtbase" package for R:

      [ 68%] Building CXX object smoke/qt/CMakeFiles/smokeqt.dir/x_1.cpp.o
      /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp: In static member function ‘static void __smokeqt::x_QAbstractPrintDialog::x_8(Smoke::StackItem*)’:
      /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp:4893: error: cannot allocate an object of abstract type ‘__smokeqt::x_QAbstractPrintDialog’
      /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp:4834: note: because the following virtual functions are pure within ‘__smokeqt::x_QAbstractPrintDialog’:
      /usr/include/qt4/QtGui/qabstractprintdialog.h:89: note: virtual int QAbstractPrintDialog::exec()
      /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp: In constructor ‘__smokeqt::x_QAbstractPrintDialog::x_QAbstractPrintDialog()’:
      /home/mroot/qtbase/kdebindings-build/smoke/qt/x_1.cpp:4896: error: no matching function for call to ‘QAbstractPrintDialog::QAbstractPrintDialog()’
      /usr/include/qt4/QtGui/qabstractprintdialog.h:116: note: candidates are: QAbstractPrintDialog::QAbstractPrintDialog(const QAbstractPrintDialog&)
      /usr/include/qt4/QtGui/qabstractprintdialog.h:113: note: QAbstractPrintDialog::QAbstractPrintDialog(QAbstractPrintDialogPrivate&, QPrinter*, QWidget*)
      /usr/include/qt4/QtGui/qabstractprintdialog.h:86: note: QAbstractPrintDialog::QAbstractPrintDialog(QPrinter*, QWidget*)
      make[3]: *** [smoke/qt/CMakeFiles/smokeqt.dir/x_1.cpp.o] Error 1
      make[3]: Leaving directory `/home/mroot/qtbase/kdebindings-build'
      make[2]: *** [smoke/qt/CMakeFiles/smokeqt.dir/all] Error 2
      make[2]: Leaving directory `/home/mroot/qtbase/kdebindings-build'
      make[1]: *** [all] Error 2
      make[1]: Leaving directory `/home/mroot/qtbase/kdebindings-build'
      make: *** [all] Error 2
      ERROR: compilation failed for package ‘qtbase’
      * removing ‘/home/mroot/R/i686-pc-linux-gnu-library/2.12/qtbase’

    Any ideas?

    Read the article

  • A record not resolving

    - by user1561108
    I have a hosted domain at SiteGround. On this domain and host I have a subdomain with a WordPress install. I wish to move this blog to another host (HostGator) while keeping the domain with SiteGround. To do this I created a hosting account at HostGator, got its IP address and set the A record in SiteGround's cPanel accordingly:

      subdomain.example.com 14400 A (ip of hostgator account)

    Going by this online traceroute tool the records appear to have been updated (over 4 hours ago now), as it now resolves to a theplanet.com server location, which HostGator use, yet the subdomain is still not resolving from a web browser. The account at HostGator has been set up and is navigable via ip address/~accountname. What's going wrong here? I should add that the relevant DNS records on the HostGator side look like this:

      subdomain.example.com 86400 IN SOA ns483.websitewelcome.com.
      subdomain.example.com 86400 IN NS ns1.siteground145.com.
      subdomain.example.com 86400 IN NS ns2.siteground145.com.
      subdomain.example.com 14400 IN A 74.54.176.3

    I'm not sure if the HostGator record should be classed as the SOA record, but I don't know enough about it to be sure. Is this the source of the problem?
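    Browsers go through the operating system's resolver (and its cache), so it can help to check what the name resolves to locally rather than only through an online tool. Below is a small C++ sketch of such a lookup using the standard POSIX getaddrinfo() call; the hostname is only a placeholder, and on Windows you would additionally need Winsock initialisation.

      // resolve.cpp - print the IPv4/IPv6 addresses a hostname resolves to,
      // using the system resolver (the same path a browser normally takes).
      #include <cstdio>
      #include <netdb.h>
      #include <netinet/in.h>
      #include <arpa/inet.h>
      #include <sys/socket.h>

      int main(int argc, char** argv) {
          const char* host = (argc > 1) ? argv[1] : "subdomain.example.com"; // placeholder
          addrinfo hints{};
          hints.ai_family   = AF_UNSPEC;      // both IPv4 and IPv6
          hints.ai_socktype = SOCK_STREAM;

          addrinfo* results = nullptr;
          int rc = getaddrinfo(host, nullptr, &hints, &results);
          if (rc != 0) {
              std::fprintf(stderr, "getaddrinfo(%s) failed: %s\n", host, gai_strerror(rc));
              return 1;
          }
          for (addrinfo* p = results; p != nullptr; p = p->ai_next) {
              char buf[INET6_ADDRSTRLEN] = {0};
              if (p->ai_family == AF_INET) {
                  auto* a = reinterpret_cast<sockaddr_in*>(p->ai_addr);
                  inet_ntop(AF_INET, &a->sin_addr, buf, sizeof buf);
              } else if (p->ai_family == AF_INET6) {
                  auto* a = reinterpret_cast<sockaddr_in6*>(p->ai_addr);
                  inet_ntop(AF_INET6, &a->sin6_addr, buf, sizeof buf);
              } else {
                  continue;
              }
              std::printf("%s -> %s\n", host, buf);
          }
          freeaddrinfo(results);
          return 0;
      }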

    Read the article

  • What is the point of the PImpl pattern when we can use an interface for the same purpose in C++?

    - by ZijingWu
    I see a lot of source code that uses the PIMPL idiom in C++. I assume its purpose is to hide private data/types/implementation, so it can break dependencies and thereby reduce compile time and header-include issues. But an interface class in C++ also has this capability; it can also be used to hide data/types and implementation. And to hide them and let the caller see only the interface when creating an object, we can add a factory method declaration to the interface header. The comparison is:

    Cost: the interface way costs less, because you don't even need to repeat the public wrapper function implementation void Bar::doWork() { return m_impl->doWork(); }; you just need to define the signature in the interface.

    Well understood: the interface technique is better understood by every C++ developer.

    Performance: the interface way performs no worse than the PIMPL idiom; both add an extra memory access. I assume the performance is the same.

    Following is pseudocode to illustrate my question:

      // Forward declaration lets you avoid including the BarImpl header, and whatever that header includes.
      class BarImpl;
      class Bar
      {
      public:
          // public functions
          void doWork();

      private:
          // You don't need to recompile Bar.cpp after changing the implementation in BarImpl.cpp
          BarImpl* m_impl;
      };

    The same purpose can be implemented using an interface:

      // Bar.h
      class IBar
      {
      public:
          virtual ~IBar(){}
          // public functions
          virtual void doWork() = 0;
      };

      // to expose only the interface, instead of the class name, to the caller
      IBar* createObject();

    So what's the point of PIMPL?
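    For completeness, here is a minimal sketch of the definition side of the interface-plus-factory approach described above (IBar and createObject come from the question; the BarImpl body is invented for illustration):

      // Bar.cpp - everything below stays out of the public header, so callers
      // only recompile when IBar itself changes.
      #include "Bar.h"   // assumed to declare IBar and createObject()

      namespace {

      // Concrete implementation, invisible to callers.
      class BarImpl : public IBar
      {
      public:
          void doWork() override
          {
              // ... real work goes here ...
          }
      };

      } // unnamed namespace

      // Factory: the only symbol the caller needs besides the interface.
      IBar* createObject()
      {
          return new BarImpl();   // caller owns the object (or return a smart pointer)
      }

    The point of the comparison with PIMPL is that BarImpl can change freely without touching the header that callers include.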

    Read the article

  • New Online Learning Library (OLL) content

    - by Irina
    Looking to brush up on OAM or OVD skills? Want some help with OIM? Well, have you checked our Online Learning Library (OLL) recently? OLL is a great way to pick up new skills in short blocks of time, and there is an enormous selection, on a diverse set of products. Every month these trainings get hundreds or thousands of hits. It would be worth your while to spend some time just poking around the nooks and crannies for items that interest you. A smattering of new OBEs and other content have recently become available, and if you haven't already, you might want to check them out:
    - Identity Management: Business Scenarios
    - Business and IT – Collaborative Access Review Sign Off and Closed Loop Identity Certification
    - Oracle Identity Governance: End to End integration From Oracle Identity Manager to a Target Webservice
    - Oracle Identity Manager: Configuring SOA Composite
    - Oracle Identity Manager: Web Services Connector - Overview
    - How to do a basic Oracle Virtual Directory (OVD) Setup?
    - How to setup a simple Oracle Virtual Directory (OVD) Join?
    - Installing Oracle Access Manager: Identity Server and WebPass

    Also new is an Oracle University 5-day class you might want to investigate: Oracle Access Manager R2: Administration Essentials. An OAM Advanced Administration class is in the works and should be available late summer or fall, so keep your calendar clear! Be sure to let us know in the Comments if there is a training you would find useful. Happy Trails :)

    Read the article

  • How do I configure upstart to run a script on shutdown when the process takes longer than 10secs?

    - by Tiris
    I am running Ubuntu 11.10 in a virtual machine (VirtualBox) to learn more about development in Linux. I am using a git repository to save my work and have written a script to bundle my work up and save it to the shared folder for use while the virtual machine is not running. I would like to automatically run this script before shutdown so that my work is always available if the VM is off (currently I have to run the script manually). I don't know if upstart is the best way to accomplish this, but this is the config that I wrote as a test:

      description "test script to run at shutdown"
      start on runlevel [056]
      task

      script
          touch /media/sf_LinuxEducation/start
          sleep 15
          touch /media/sf_LinuxEducation/start-long
      end script

      pre-start script
          touch /media/sf_LinuxEducation/pre-start
          sleep 15
          touch /media/sf_LinuxEducation/pre-start-long
      end script

      post-start script
          touch /media/sf_LinuxEducation/post-start
          sleep 15
          touch /media/sf_LinuxEducation/post-start-long
      end script

      pre-stop script
          touch /media/sf_LinuxEducation/pre-stop
          sleep 15
          touch /media/sf_LinuxEducation/pre-stop-long
      end script

      post-stop script
          touch /media/sf_LinuxEducation/post-stop
          sleep 15
          touch /media/sf_LinuxEducation/post-stop-long
      end script

    The result is that only one touch is accomplished (the first touch in pre-start). What do I need to change so that the touches after the sleep also happen? Or is there an easier way to get this accomplished? Thanks in advance.

    Read the article

  • Function for building an isosurface (a sphere cut by planes)

    - by GameDevEnthusiast
    I want to build an octree over a quarter of a sphere (for debugging and testing). The octree generator relies on the AIsosurface interface to compute the density and normal at any given point in space. For example, for a full sphere the corresponding code is:

      // returns <0 if the point is inside the solid
      virtual float GetDensity( float _x, float _y, float _z ) const override
      {
          Float3 P = Float3_Set( _x, _y, _z );
          Float3 v = Float3_Subtract( P, m_origin );
          float l = Float3_LengthSquared( v );
          float d = Float_Sqrt(l) - m_radius;
          return d;
      }

      // estimates the gradient at the given point
      virtual Float3 GetNormal( float _x, float _y, float _z ) const override
      {
          Float3 P = Float3_Set( _x, _y, _z );
          float d = this->AIsosurface::GetDensity( P );
          float Nx = this->GetDensity( _x + 0.001f, _y, _z ) - d;
          float Ny = this->GetDensity( _x, _y + 0.001f, _z ) - d;
          float Nz = this->GetDensity( _x, _y, _z + 0.001f ) - d;
          Float3 N = Float3_Normalized( Float3_Set( Nx, Ny, Nz ) );
          return N;
      }

    What is a nice and fast way to compute those values when the shape is bounded by a low number of half-spaces?
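    One common approach (a sketch, not taken from the original code) is to treat the cut sphere as a CSG intersection and return the maximum of the sphere's signed distance and each half-space's signed distance. The result is no longer an exact distance field near the cut edges, but its sign is correct, which is what the octree builder needs, and the finite-difference GetNormal above keeps working on top of it. The sketch assumes the question's Float3 helpers plus a Float3_Dot function, and the CutPlane struct and m_planes/m_numPlanes members are invented for illustration.

      // Half-space used to cut the sphere.
      struct CutPlane
      {
          Float3 m_point;   // any point on the plane (assumed member name)
          Float3 m_normal;  // unit normal pointing *out* of the solid
      };

      // returns <0 if the point is inside the sphere AND on the inner side of every plane
      virtual float GetDensity( float _x, float _y, float _z ) const override
      {
          Float3 P = Float3_Set( _x, _y, _z );

          // sphere distance, same as the full-sphere version
          Float3 v = Float3_Subtract( P, m_origin );
          float d = Float_Sqrt( Float3_LengthSquared( v ) ) - m_radius;

          // CSG intersection: keep the largest (least inside) distance
          for( int i = 0; i < m_numPlanes; ++i )
          {
              const CutPlane& pl = m_planes[i];
              float planeDist = Float3_Dot( Float3_Subtract( P, pl.m_point ), pl.m_normal );
              if( planeDist > d ) {
                  d = planeDist;
              }
          }
          return d; // not a true distance near the cut edges, but the sign is right
      }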

    Read the article

  • Oracle VM Forum report (original Japanese title garbled by character encoding)

    - by rika.tokumichi
    [Japanese-language post; the original text was corrupted by character encoding and only fragments survive.] The post reports on the Oracle VM Forum held on 18 May 2010 and summarises its sessions. Recoverable details: an Oracle VM for x86 session discussing Xen and Xen 4.0, with virtualization testing on a Dell PowerEdge R910 (listed with 32 CPUs and 64 GB of memory) and a mention of the 2U Dell PowerEdge R810; memory-management features such as Self-Ballooning and Transcendent Memory; and an application-virtualization session covering Oracle Virtual Assembly Builder and Oracle WebLogic Server on JRockit Virtual Edition. Session materials were made available through OTN and the Fusion Middleware pages.

    Read the article

  • AuthSub token from Google/YouTube API is always returned as invalid

    - by Miriam Raphael Roberts
    Anyone out there have experience with the YouTube/Google API? I am trying to login to Google/Youtube using clientLogin, retrieve an AuthSub token, exchange it for a multi-session token and then use it in our upload form. Just a note that we are not going to have other users logging into our (secure) website, this is for our use only (no multi-users). We just want a way to upload videos to our YT account via our own website without having to login/upload to YouTube. Ultimately, everything is dependent on the first step. My AuthSub token is always being returned as invalid (Error '403'). All the steps I used are below with username/password changed. Anyone have an insight on why my AuthSub is always invalid? I am spending an enormous amount of time trying to get this to work. STEP 1: Getting the authsub token from Youtube/Google POST /youtube/accounts/ClientLogin HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Content-Length: 86 Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test RESPONSE RECEIVED: Auth=AIwbFAR99f3iACfkT-5PXCB-1tN4vlyP_1CiNZ8JOn6P-......yv4d4zeGRemNm4il1e-M6czgfDXAR0w9fQ YouTubeUser=MyYouTubeUsername CURL COMMAND USED: /usr/bin/curl -S -v --location https://www.google.com/youtube/accounts/ClientLogin --data Email=MyGoogleUsername&Passwd=MyGooglePasswd&accountType=GOOGLE&service=youtube&source=Test --header Content-Type:application/x-www-form-urlencoded STEP 2: Exchanging the AuthSub token for a multi-use token GET /accounts/AuthSubSessionToken HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Authorization: AuthSub token="AIwbFASiRR3XDKs......p5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" Response received: 403 Invalid AuthSub token. curl command used: /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubSessionToken --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2g.....vp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ" STEP 3: Checking to see if the token is good/valid GET /accounts/AuthSubTokenInfo HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: www.google.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/x-www-form-urlencoded Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhKs3u.....A_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" Received response: 403 Invalid AuthSub token. 
curl command used: /usr/bin/curl -S -v --location https://www.google.com/accounts/AuthSubTokenInfo --header Content-Type:application/x-www-form-urlencoded -H Authorization: AuthSub token="AIwbFAQR_4xG2gHoAKDsNdFqdZdwWjGeNquOLpvp3BQZW5XEMyIj_wFozHSTEQ-BQRfYuIY-1CyqLeQ" STEP 4: Trying to get the upload token using the authsub token POST /action/GetUploadToken HTTP/1.1 User-Agent: curl/7.10.6 (i386-redhat-linux-gnu) libcurl/7.10.6 OpenSSL/0.9.7a ipv6 zlib/1.1.4 Host: gdata.youtube.com Pragma: no-cache Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */* Content-Type:application/atom+xml Authorization: AuthSub token="AIwbFASiRR3XDKsNkaIoPaujN5RQhp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGFKpxd_7a6hEERh_3......R82AShoQ" Content-Length:0 GData-Version:2 Recevied Response: 401 Token invalid - Invalid AuthSub token. Curl command used: /usr/bin/curl -S -v --location http://gdata.youtube.com/action/GetUploadToken -H Content-Type:application/atom+xml -H Authorization: AuthSub token="AIwbFASiRR3XDKs....sYDp5Oy_VA_9U2yV1enxJoVGSgMlZqTcjKw9mS861vlc9GWTH9D9sQ" -H X-Gdata-Key:key="AI39si5EQyo-TZPFAnmGjxJGF......Kpxd6dN2J1oHFQYTj_7a6hEERh_3E48R82AShoQ" -H Content-Length:0 -H GData-Version:2

    Read the article

  • How to expose service contract interfaces with multiple inheritance in a WCF service on a single endpoint

    - by Vaibhav Gawali
    I have only simple data types (such as int and string) in the service's method signatures. My service class implements a single ServiceContract interface, say IMathService, and this interface in turn inherits from some other base interface, say IAdderService. I want to expose the MathService using the interface contract IAdderService as a service on a single endpoint. However, some of the clients which know about IMathService should be able to access the extra operations provided by IMathService on that single endpoint, i.e. by just casting IAdderService to IMathService.

      // Interfaces and classes at server side
      [ServiceContract]
      public interface IAdderService
      {
          [OperationContract]
          int Add(int num1, int num2);
      }

      [ServiceContract]
      public interface IMathService : IAdderService
      {
          [OperationContract]
          int Substract(int num1, int num2);
      }

      public class MathService : IMathService
      {
          #region IMathService Members
          public int Substract(int num1, int num2) { return num1 - num2; }
          #endregion

          #region IAdderService Members
          public int Add(int num1, int num2) { return num1 + num2; }
          #endregion
      }

      // Run WCF service as a singleton instance
      MathService mathService = new MathService();
      ServiceHost host = new ServiceHost(mathService);
      host.Open();

    Server side configuration:

      <configuration>
        <system.serviceModel>
          <services>
            <service name="IAdderService" behaviorConfiguration="AdderServiceServiceBehavior">
              <endpoint address="net.pipe://localhost/AdderService" binding="netNamedPipeBinding" bindingConfiguration="Binding1" contract="TestApp.IAdderService" />
              <endpoint address="mex" binding="mexNamedPipeBinding" contract="IMetadataExchange" />
              <host>
                <baseAddresses>
                  <add baseAddress="net.pipe://localhost/AdderService"/>
                </baseAddresses>
              </host>
            </service>
          </services>
          <bindings>
            <netNamedPipeBinding>
              <binding name="Binding1">
                <security mode="None">
                </security>
              </binding>
            </netNamedPipeBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="AdderServiceServiceBehavior">
                <serviceMetadata />
                <serviceDebug includeExceptionDetailInFaults="True" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>
      </configuration>

    Client side implementation:

      IAdderService adderService = new ChannelFactory<IAdderService>("AdderService").CreateChannel();
      int result = adderService.Add(10, 11);
      IMathService mathService = adderService as IMathService;
      result = mathService.Substract(100, 9);

    Client side configuration:

      <configuration>
        <system.serviceModel>
          <client>
            <endpoint name="AdderService" address="net.pipe://localhost/AdderService" binding="netNamedPipeBinding" bindingConfiguration="Binding1" contract="TestApp.IAdderService" />
          </client>
          <bindings>
            <netNamedPipeBinding>
              <binding name="Binding1" maxBufferSize="65536" maxConnections="10">
                <security mode="None">
                </security>
              </binding>
            </netNamedPipeBinding>
          </bindings>
        </system.serviceModel>
      </configuration>

    Using the above code and configuration I am not able to cast the IAdderService instance to IMathService; it fails and I get a null instance of IMathService at the client side. My observation is that if the server exposes IMathService to the client, then the client can safely cast to IAdderService, and vice versa is also possible. However, if the server exposes IAdderService then the cast fails. Is there any solution to this, or am I doing it in a wrong way?

    Read the article

  • Session Cookies and IE 8

    - by Matt Luongo
    I recently built a simple web-app deployed over Tomcat. The app uses pretty standard session based security where a user who has logged in is given a session. Sessions work fine in Firefox and Chrome, but require the use of jsessionid in the URL for IE (tested 7 & 8), set to medium privacy. In IE 8, I tried to override cookie handling, setting "Allow all 3rd party cookies" and "Allow all session cookies"- no dice. However, when I run Tomcat on my local machine, IE accepts the cookie, and sessions work just fine. And now, for the HTTP headers. From Chrome, a logged in user gets a session GET http://devl:8080/testing/ HTTP/1.1 Host: devl:8080 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 HTTP/1.1 200 OK Server: Apache-Coyote/1.1 P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA" Set-Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397; Path=/testing Content-Type: text/html;charset=UTF-8 Content-Language: en-US Content-Length: 2450 Date: Fri, 26 Mar 2010 14:14:40 GMT GET http://devl:8080/testing/logout HTTP/1.1 Host: devl:8080 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5 Referer: http://devl:8080/testing/ Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397 ... From IE 8, with standard medium level security and privacy- GET http://devl:8080/testing/ HTTP/1.1 Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */* Accept-Language: en-US User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0) UA-CPU: AMD64 Accept-Encoding: gzip, deflate Host: devl:8080 Connection: Keep-Alive HTTP/1.1 200 OK Server: Apache-Coyote/1.1 P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA" Set-Cookie: JSESSIONID=192999F922D6E9C868314452726764BA; Path=/testing Content-Type: text/html;charset=UTF-8 Content-Language: en-US Content-Length: 2450 Date: Fri, 26 Mar 2010 14:32:34 GMT GET http://devl:8080/testing/logout HTTP/1.1 Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */* Referer: http://devl:8080/testing/;jsessionid=6371A83EFE39A46997544F9146AA5CEA Accept-Language: en-US User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0) UA-CPU: AMD64 Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: devl:8080 ... I thought it might be P3P, but on adding a compact policy, nothing changes. This is the standard Tomcat session, so I'm really surprised I haven't been able to find other people with the same problem so far. Anyone have any ideas?

    Read the article

  • Hosting Mercurial on IIS7

    - by Lasse V. Karlsen
    Note, this might perhaps be best suited on serverfault.com, but since it is about hosting a programmer source code repository, I am not entirely sure. I'm posting here first, trusting that it'll be migrated if necessary. I'm attempting to host clones of my Mercurial repositories on my own server (I have the main repo somewhere else), and I'm attempting to set up Mercurial under IIS. I followed the guide here, but I get an error message. Solved: See bottom of this question for details. The error message is: mercurial.error.RepoError: repository /path/to/repo/or/config not found Here's what I did. I installed Mercurial 1.5.2 I created c:\inetpub\hg I downloaded the hg source as per the instructions of the webpage, and copied the hgweb.cgi file into c:\inetpub\hg (note, the webpage says hgwebdir.cgi, but this particular file does not exist, hgweb.cgi does, however, can this be the source of the problem?) I added a hgweb.config, with the following contents: [paths] repo1 = C:/hg/** [web] style = monoblue I created c:\hg, created a sub-directory test, and created a repository inside it I installed python 2.6.5, latest 2.6 version from the website (the webpage mentions I need to install the correct version or I'll get a specific error message, since I don't get an error message that looks remotely like the one mentioned, I assume that 2.6.5 is not the problem) I added a new virtual host hg.vkarlsen.no, pointing it to c:\inetpub\hg For this host, I added a script mapping under the Handler Mappings section, mapping *.cgi to c:\python26\python.exe -u %s %s as per the instructions on the website. I then tested it by navigating to http://hg.vkarlsen.no/hgweb.cgi, but I get an error message. To make it easier to test, I dropped to a command prompt, navigated to c:\inetpub\hg, and executed the following command (error message is part of the text below): C:\inetpub\hg>c:\python26\python.exe -u hgweb.cgi Traceback (most recent call last): File "hgweb.cgi", line 16, in <module> application = hgweb(config) File "mercurial\hgweb\__init__.pyc", line 12, in hgweb File "mercurial\hgweb\hgweb_mod.pyc", line 30, in __init__ File "mercurial\hg.pyc", line 82, in repository File "mercurial\localrepo.pyc", line 2221, in instance File "mercurial\localrepo.pyc", line 62, in __init__ mercurial.error.RepoError: repository /path/to/repo/or/config not found Does anyone know what I need to look at in order to fix this? Edit: Ok, I think I managed to get one step closer to the solution, but I'm still stumped. I realized the .cgi file is a python script file, and not something compile, so I opened it for editing, and these lines was sitting in it: # Path to repo or hgweb config to serve (see 'hg help hgweb') config = "/path/to/repo/or/config" So this was my source for the specific error message. If I change the line to this: config = "c:\\hg\\test" Then I can navigate the empty repository through the Mercurial web interface. 
However, I want to host multiple repositories, and seeing as the line says that I can also link to a hgweb config file, I tried this: config = "c:\\inetpub\\hg\\hgweb.config" But then I get the following error message: mercurial.error.Abort: c:\inetpub\hg\hgweb.config: not a Mercurial bundle file Exception ImportError: 'No module named shutil' in <bound method bundlerepository.__del__ of <mercurial.bundlerepo.bundlerepository object at 0x0260A110>> ignored Nothing I've tried for the config variable seems to work: config = "hgweb.config" config = "c:\\hg\\hgweb.config" various other variations I don't remember. So, still stumped, pointers anyone? Solved: I ended up having to edit the hgweb.cgi file: from: from mercurial.hgweb import hgweb, wsgicgi application = hgweb(config) to: from mercurial.hgweb import hgweb, hgwebdir, wsgicgi application = hgwebdir(config) Note the added hgwebdir parts there. Here's my hgweb.config file, located in the same directory as hgweb.cgi file: [collections] C:/hg/ = C:/hg/ [web] style = gitweb This now serves my repositories successfully. Hopefully this question will give others some information if they're stumped as I was.

    Read the article

  • Single django instance with subdomains for each app in the django project

    - by jwesonga
    I have a django project (django+apache+mod_wsgi+nginx) with multiple apps, I'd like to map each app as a subdomain: project/ app1 (domain.com) app2 (sub1.domain.com) app3 (sub3.domain.com) I have a single .wsgi script serving the project, which is stored in a folder /apache. Below is my vhost file. I'm using a single vhost file instead of separate ones for each sub-domain: <VirtualHost *:8080> ServerAdmin [email protected] ServerName www.domain.com ServerAlias domain.com DocumentRoot /home/path/to/app/ Alias /admin_media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media <Directory /home/path/to/wsgi/apache/> Order deny,allow Allow from all </Directory> LogLevel warn ErrorLog /home/path/to/logs/apache_error.log CustomLog /home/path/to/logs/apache_access.log combined WSGIDaemonProcess domain.com user=www-data group=www-data threads=25 WSGIProcessGroup domain.com WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi </VirtualHost> <VirtualHost *:8081> ServerAdmin [email protected] ServerName sub1.domain.com ServerAlias sub1.domain.com DocumentRoot /home/path/to/app Alias /admin_media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media <Directory /home/path/to/wsgi/apache/> Order deny,allow Allow from all </Directory> LogLevel warn ErrorLog /home/path/to/logs/apache_error.log CustomLog /home/path/to/logs/apache_access.log combined WSGIDaemonProcess sub1.domain.com user=www-data group=www-data threads=25 WSGIProcessGroup sub1.domain.com WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi </VirtualHost> My Nginx configuration for the domain.com: server { listen 80; server_name domain.com; access_log off; error_log off; # proxy to Apache 2 and mod_wsgi location / { proxy_pass http://127.0.0.1:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } Configuration for the sub.domain.com: server { listen 80; server_name sub.domain.com; access_log off; error_log off; # proxy to Apache 2 and mod_wsgi location / { proxy_pass http://127.0.0.1:8081/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } This set up doesn't seem to work, everything seems to point to the main domain. I've tried http://effbot.org/zone/django-multihost.htm which kind of worked but seems to have issues with loading my css,images,js files.

    Read the article

  • Problem with URL encoded parameters in URL view helper

    - by Richard Knop
    So my problem is kinda weird, it only occurs when I test the application offline (on my PC with WampServer), the same code works 100% correctly online. Here is how I use the helper (just example): <a href="<?php echo $this->url(array('module' => 'admin', 'controller' => 'index', 'action' => 'approve-photo', 'id' => $this->escape($a->id), 'ret' => urlencode('admin/index/photos')), null, true); ?>" class="blue">approve</a> Online, this link works great, it goes to the action which looks similar to this: public function approvePhotoAction() { $request = $this->getRequest(); $photos = $this->_getTable('Photos'); $dbTrans = false; try { $db = $this->_getDb(); $dbTrans = $db->beginTransaction(); $photos->edit($request->getParam('id'), array('status' => 'approved')); $db->commit(); } catch (Exception $e) { if (true === $dbTrans) { $db->rollBack(); } } $this->_redirect(urldecode($request->getParam('ret'))); } So online, it approves the photo and redirects back to the URL that is encoded as "ret" param in the URL ("admin/index/photos"). But offline with WampServer I click on the link and get 404 error like this: Not Found The requested URL /public/admin/index/approve-photo/id/1/ret/admin/index/photos was not found on this server. What the hell? I'm using the latest version of WampServer (WampServer 2.0i [11/07/09]). Everything else works. Here is my .htaccess file, just in case: RewriteEngine On RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ /index.php [NC,L] # Turn off magic quotes #php_flag magic_quotes_gpc off I'm using virtual hosts to test ZF projects on my local PC. Here is how I add virtual hosts. httpd.conf: NameVirtualHost *:80 <VirtualHost *:80> ServerName myproject DocumentRoot "C:\wamp\www\myproject" </VirtualHost> the hosts file: 127.0.0.1 myproject Any help would be appreciated because this makes testing and debugging projects on my localhost a nightmare and almost impossible task. I have to upload everything online to check if it works :( UPDATE: Source code of the _redirect helper (built in ZF helper): /** * Set redirect in response object * * @return void */ protected function _redirect($url) { if ($this->getUseAbsoluteUri() && !preg_match('#^(https?|ftp)://#', $url)) { $host = (isset($_SERVER['HTTP_HOST'])?$_SERVER['HTTP_HOST']:''); $proto = (isset($_SERVER['HTTPS'])&&$_SERVER['HTTPS']!=="off") ? 'https' : 'http'; $port = (isset($_SERVER['SERVER_PORT'])?$_SERVER['SERVER_PORT']:80); $uri = $proto . '://' . $host; if ((('http' == $proto) && (80 != $port)) || (('https' == $proto) && (443 != $port))) { $uri .= ':' . $port; } $url = $uri . '/' . ltrim($url, '/'); } $this->_redirectUrl = $url; $this->getResponse()->setRedirect($url, $this->getCode()); } UPDATE 2: Output of the helper offline: /admin/index/approve-photo/id/1/ret/admin%252Findex%252Fphotos And online (the same): /admin/index/approve-photo/id/1/ret/admin%252Findex%252Fphotos UPDATE 3: OK. The problem was actually with the virtual host configuration. The document root was set to C:\wamp\www\myproject instead of C:\wamp\www\myproject\public. 
And I was using this htaccess to redirect to the public folder: RewriteEngine On php_value upload_max_filesize 15M php_value post_max_size 15M php_value max_execution_time 200 php_value max_input_time 200 # Exclude some directories from URI rewriting #RewriteRule ^(dir1|dir2|dir3) - [L] RewriteRule ^\.htaccess$ - [F] RewriteCond %{REQUEST_URI} ="" RewriteRule ^.*$ /public/index.php [NC,L] RewriteCond %{REQUEST_URI} !^/public/.*$ RewriteRule ^(.*)$ /public/$1 RewriteCond %{REQUEST_FILENAME} -f RewriteRule ^.*$ - [NC,L] RewriteRule ^public/.*$ /public/index.php [NC,L] Damn it I don't know why I forgot about this, I thought the virtual host was configured correctly to the public folder, I was 100% sure about that, I also double checked it and saw no problem (wtf? am I blind?).

    Read the article

  • Fluent NHibernate - subclasses with shared reference

    - by ollie
    Edit: changed class names. I'm using Fluent NHibernate (v 1.0.0.614) automapping on the following set of classes (where Entity is the base class provided in the S#arp Architecture framework):

      public class Car : Entity
      {
          public virtual int ModelYear { get; set; }
          public virtual Company Manufacturer { get; set; }
      }

      public class Sedan : Car
      {
          public virtual bool WonSedanOfYear { get; set; }
      }

      public class Company : Entity
      {
          public virtual IList<Sedan> Sedans { get; set; }
      }

    This results in the following configuration (as written to hbm.xml):

      <class name="Company" table="Companies">
        <id name="Id" type="System.Int32" unsaved-value="0">
          <column name="`ID`" />
          <generator class="identity" />
        </id>
        <bag cascade="all" inverse="true" name="Sedans" mutable="true">
          <key>
            <column name="`CompanyID`" />
          </key>
          <one-to-many class="Sedan" />
        </bag>
      </class>

      <class name="Car" table="Cars">
        <id name="Id" type="System.Int32" unsaved-value="0">
          <column name="`ID`" />
          <generator class="identity" />
        </id>
        <property name="ModelYear" type="System.Int32">
          <column name="`ModelYear`" />
        </property>
        <many-to-one cascade="save-update" class="Company" name="Manufacturer">
          <column name="`CompanyID`" />
        </many-to-one>
        <joined-subclass name="Sedan">
          <key>
            <column name="`CarID`" />
          </key>
          <property name="WonSedanOfYear" type="System.Boolean">
            <column name="`WonSedanOfYear`" />
          </property>
        </joined-subclass>
      </class>

    So far so good! But now comes the ugly part. The generated database tables:

      Table: Companies
        Columns: ID (PK, int, not null)

      Table: Cars
        Columns: ID (PK, int, not null), ModelYear (int, null), CompanyID (FK, int, null)

      Table: Sedan
        Columns: CarID (PK, FK, int, not null), WonSedanOfYear (bit, null), CompanyID (FK, int, null)

    Instead of one FK for Company, I get two! How can I ensure I only get one FK for Company? Override the automapping? Put a convention in place? Or is this a bug? Your thoughts are appreciated.

    Read the article

  • BasicAuthProvider in ServiceStack

    - by Per
    I've got an issue with the BasicAuthProvider in ServiceStack. POST-ing to the CredentialsAuthProvider (/auth/credentials) is working fine. The problem is that when GET-ing (in Chrome): http://foo:pwd@localhost:81/tag/string/list the following is the result Handler for Request not found: Request.HttpMethod: GET Request.HttpMethod: GET Request.PathInfo: /login Request.QueryString: System.Collections.Specialized.NameValueCollection Request.RawUrl: /login?redirect=http%3a%2f%2flocalhost%3a81%2ftag%2fstring%2flist which tells me that it redirected me to /login instead of serving the /tag/... request. Here's the entire code for my AppHost: public class AppHost : AppHostHttpListenerBase, IMessageSubscriber { private ITagProvider myTagProvider; private IMessageSender mySender; private const string UserName = "foo"; private const string Password = "pwd"; public AppHost( TagConfig config, IMessageSender sender ) : base( "BM App Host", typeof( AppHost ).Assembly ) { myTagProvider = new TagProvider( config ); mySender = sender; } public class CustomUserSession : AuthUserSession { public override void OnAuthenticated( IServiceBase authService, IAuthSession session, IOAuthTokens tokens, System.Collections.Generic.Dictionary<string, string> authInfo ) { authService.RequestContext.Get<IHttpRequest>().SaveSession( session ); } } public override void Configure( Funq.Container container ) { Plugins.Add( new MetadataFeature() ); container.Register<BeyondMeasure.WebAPI.Services.Tags.ITagProvider>( myTagProvider ); container.Register<IMessageSender>( mySender ); Plugins.Add( new AuthFeature( () => new CustomUserSession(), new AuthProvider[] { new CredentialsAuthProvider(), //HTML Form post of UserName/Password credentials new BasicAuthProvider(), //Sign-in with Basic Auth } ) ); container.Register<ICacheClient>( new MemoryCacheClient() ); var userRep = new InMemoryAuthRepository(); container.Register<IUserAuthRepository>( userRep ); string hash; string salt; new SaltedHash().GetHashAndSaltString( Password, out hash, out salt ); // Create test user userRep.CreateUserAuth( new UserAuth { Id = 1, DisplayName = "DisplayName", Email = "[email protected]", UserName = UserName, FirstName = "FirstName", LastName = "LastName", PasswordHash = hash, Salt = salt, }, Password ); } } Could someone please tell me what I'm doing wrong with either the SS configuration or how I am calling the service, i.e. why does it not accept the supplied user/pwd? Update1: Request/Response captured in Fiddler2when only BasicAuthProvider is used. No Auth header sent in the request, but also no Auth header in the response. GET /tag/string/AAA HTTP/1.1 Host: localhost:81 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,sv;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: ss-pid=Hu2zuD/T8USgvC8FinMC9Q==; X-UAId=1; ss-id=1HTqSQI9IUqRAGxM8vKlPA== HTTP/1.1 302 Found Location: /login?redirect=http%3a%2f%2flocalhost%3a81%2ftag%2fstring%2fAAA Server: Microsoft-HTTPAPI/2.0 X-Powered-By: ServiceStack/3,926 Win32NT/.NET Date: Sat, 10 Nov 2012 22:41:51 GMT Content-Length: 0 Update2 Request/Response with HtmlRedirect = null . 
SS now answers with the Auth header, which Chrome then issues a second request for and authentication succeeds GET http://localhost:81/tag/string/Abc HTTP/1.1 Host: localhost:81 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,sv;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: ss-pid=Hu2zuD/T8USgvC8FinMC9Q==; X-UAId=1; ss-id=1HTqSQI9IUqRAGxM8vKlPA== HTTP/1.1 401 Unauthorized Transfer-Encoding: chunked Server: Microsoft-HTTPAPI/2.0 X-Powered-By: ServiceStack/3,926 Win32NT/.NET WWW-Authenticate: basic realm="/auth/basic" Date: Sat, 10 Nov 2012 22:49:19 GMT 0 GET http://localhost:81/tag/string/Abc HTTP/1.1 Host: localhost:81 Connection: keep-alive Authorization: Basic Zm9vOnB3ZA== User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8,sv;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: ss-pid=Hu2zuD/T8USgvC8FinMC9Q==; X-UAId=1; ss-id=1HTqSQI9IUqRAGxM8vKlPA==

    Read the article

  • How do I solve an unresolved external when using C++ Builder packages?

    - by David M
    I'm experimenting with reconfiguring my application to make heavy use of packages. Both I and another developer running a similar experiment are running into a bit of trouble when linking using several different packages. We're probably both doing something wrong, but goodness knows what :)

    The situation is this: the first package, PackageA.bpl, contains the C++ class FooA. The class is declared with the PACKAGE directive. The second package, PackageB.bpl, contains a class inheriting from FooA, called FooB. It includes FooB.h, and the package is built using runtime packages, and links to PackageA by adding a reference to PackageA.bpi. When building PackageB, it compiles fine but linking fails with a number of unresolved externals, the first few of which are:

      [ILINK32 Error] Error: Unresolved external '__tpdsc__ FooA' referenced from C:\blah\FooB.OBJ
      [ILINK32 Error] Error: Unresolved external 'FooA::' referenced from C:\blah\FooB.OBJ
      [ILINK32 Error] Error: Unresolved external '__fastcall FooA::~FooA()' referenced from blah\FooB.OBJ
      etc.

    Running TDump on PackageA.bpl shows:

      Exports from PackageA.bpl
      14 exported name(s), 14 export addresse(s). Ordinal base is 1.
      Sorted by Name:
      RVA      Ord. Hint Name
      -------- ---- ---- ----
      00002A0C    8 0000 __tpdsc__ FooA
      00002AD8   10 0001 __linkproc__ FooA::Finalize
      00002AC8    9 0002 __linkproc__ FooA::Initialize
      00002E4C   12 0003 __linkproc__ PackageA::Finalize
      00002E3C   11 0004 __linkproc__ PackageA::Initialize
      00006510   14 0007 FooA::
      00002860    5 0008 FooA::FooA(FooA&)
      000027E4    4 0009 FooA::FooA()
      00002770    3 000A __fastcall FooA::~FooA()
      000028DC    6 000B __fastcall FooA::Method1() const
      000028F4    7 000C __fastcall FooA::Method2() const
      00001375    2 000D Finalize
      00001368    1 000E Initialize
      0000610C   13 000F ___CPPdebugHook

    So the class definitely seems to be exported and available to link. I can see entries for the specific things ILink32 says it's looking for and not finding. Running TDump on the BPI file shows similar entries.

    Other info: the class does descend from TObject, though originally, before refactoring into packages, it was a normal C++ class. (More detail below. It seems "safer" to use VCL-style classes when trying to solve problems with a very Delphi-ish thing like this anyway. Changing this only changes the order of unresolved externals, to first not finding Method1 and Method2, then others.)

    Declaration for FooA:

      class PACKAGE FooA : public TObject
      {
      public:
          FooA();
          virtual __fastcall ~FooA();
          FooA(const FooA&);
          virtual __fastcall long Method1() const;
          virtual __fastcall long Method2() const;
      };

    and FooB:

      class FooB : public FooA
      {
      public:
          FooB();
          virtual __fastcall ~FooB();
          ... other methods...
      };

    All methods definitely are implemented in the .cpp files, so the linker isn't failing to find them because they don't exist! The .cpp files also contain #pragma package(smart_init) near the top, under the includes.

    Questions that might help:
    - Are packages reliable using C++, or are they only useable with Delphi code?
    - Is linking to the first package by adding a reference to its BPI correct - is that how you're supposed to do it? I could use a LIB, but it seems to make the second package much larger, and I suspect it's statically linking in the contents of the first.
    - Can we use the PACKAGE directive only on TObject-derived classes? There is no compiler warning when using it on standard C++ classes.
    - Is splitting code into packages the best way to achieve the goal of isolating code and communicating through defined layers / interfaces?
I've been investigating this path because it seems to be the C++Builder / Delphi Way, and if it worked it looks attractive. But are there better alternatives? I'm very new to using packages and have only known about them through using components before. Any general words of advice would be great! We're using C++Builder 2010. I've fabricated the class and method names in the above code examples, but other than that the details are exactly what we're seeing. Cheers, David

    Read the article
