Search Results

Search found 25519 results on 1021 pages for 'virtual machine'.


  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    When you want to bring up a compute server in your environment and need InfiniBand connectivity, you usually go through various installation steps. This could involve an operating system like Linux, followed by a compatible InfiniBand software distribution, associated dependencies and configurations. What if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Often we use open source, community-supported small Linux distributions, but they don't come with the required InfiniBand support and tools. In this weblog, I am going to provide instructions on how to add InfiniBand support to a specific Linux image - Parted Magic. This is a free-to-use open source Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard it right! I have built an InfiniBand add-on package that will be passed to the default kernel and initrd to get this all working.
    Prerequisites: You will need to have a PXE server ready on your Ethernet-based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.
    Required Downloads: Download the Parted Magic small distribution for PXE from the Parted Magic website. Download the InfiniBand PXE add-on package (right click and download from here). Do not extract the contents of this file; you need to use it as is.
    Prepare PXE Server: Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files - bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot unless you have a different setup. For example:
      cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
      cd /tmp
      unzip pmagic_pxe_2012_2_27_x86_64.zip
      cd pmagic_pxe_2012_2_27_x86_64
      # ls -l
      total 12
      drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
      drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
      cp -r pmagic /tftpboot
    As I mentioned earlier, we don't change anything in the default pmagic distro. Simply provide the add-on package via the PXE append options. If you are using a menu-based PXE server, add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section:
      LABEL Diskless Boot With InfiniBand Support
      MENU LABEL Diskless Boot With InfiniBand Support
      KERNEL pmagic/bzImage
      APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
      TEXT HELP
      * A Linux Image which can be used to PXE Boot w/ IB tools
      ENDTEXT
    Note: Keep the line starting with "APPEND" as a single line. If you use host-specific files in pxelinux.cfg, you can add the above entry to that specific file instead.
    Boot Computer over PXE: Now boot your desired compute machine over PXE. This does not have to be over InfiniBand; just use your standard Ethernet interface and network. If using menus, pick the new entry that you created in the previous section.
    Enable IPoIB: After a few minutes, you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled. You can use commands like:
      ifconfig -a
      ibstat
      ibv_devices
      ibv_devinfo
    If you are connected to an InfiniBand network with an active Subnet Manager, then your IB interfaces must have come online by now. You can proceed and assign IP addresses to them. This will enable you at the IPoIB layer.
    Example InfiniBand Diagnostic Tools: I have added several InfiniBand diagnostic tools to this add-on. You can use any from the following list: ibstat, ibstatus, ibv_devinfo, ibv_devices, perfquery, smpquery, ibnetdiscover, iblinkinfo.pl, ibhosts, ibswitches, ibnodes.
    Wrap Up: This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. It's almost like running diskless!

    Read the article

  • General website publishing questions involving domain forwarding issue

    - by Gorgeousyousuf
    Even though I have a certain level of knowledge and experience with web development, I have never been interested in obtaining a domain and publishing a website from my own server. Since today I have been struggling with getting my own domain and configuring it using web resources. I started by learning the outline of the web publishing process, including web server installation, deploying a website for testing purposes, router port forwarding, getting a domain, and forwarding the domain to my router, which will in turn forward HTTP requests to my web server. I am confused about some parts and so far could not get the website accessed from outside of the network. All I am trying to do is for learning purposes, so I am not paying much attention to security issues for now. I have Server 2008 and IIS 7.5 installed. I use a laptop and have access to the modem over wireless, and my modem is a Zoom X6 5590. Well, I will continue explaining what I have done so far and what I expected to happen after each action. I have successfully accessed my website on any local computer by entering the internal IP address and port pair of the host machine in a browser. Next, I forwarded port 80 of my host machine by creating a virtual server like 10.0.0.x (internal static IP of the host) - tcp - start port: 80 - end port: 80 in the router options. Now I suppose every request that comes to the public IP on port 80 will be forwarded to my host machine (10.0.0.x) over port 80. So if everything went as desired, the website listening on port 80 will accept the request, process the issue, and finally respond bla bla bla... I expected to access my website from outside of the network by entering http://MyPublicIp:80 in a browser, but I couldn't accomplish this task. Despite that, using GoDaddy's domain forwarding tool, I see a small view of my website when I click the "preview" button that checks whether the address (http://publicip/Index.aspx) I entered, where my domain will be forwarded, is available or not. I am sure that configuring the domain does not play a role in solving such a problem, since using the public IP and port matching does not help. So here is the first question: why am I facing this problem? After that, I have a couple of questions regarding domain forwarding using the GoDaddy tool. Can I forward my domain to any port, for example port 8080, other than the default HTTP port 80? Additionally, can I use a subdomain to forward to a different port of the host? What I want to design is: if the client enters www.mydomain.com, website1 will respond over a specified port, and when a client enters info.mydomain.com, another website which listens on a different port will respond. I tried to add a subdomain and forward it to an address like http://www.mydomain.com:8080/Index.aspx with no success. Can I really do that? Finally, what if I have an FTP site listening on the default port 21 and I create a domain like ftp.mydomain.com that will forward to that FTP site address? Is it possible to use subdomains for FTP site access? I know I am more than confused, but no matter whatever and however you reply to me, you will help me have a clearer view of this subject. Thank you very much in advance.

    Read the article

  • What DX level does my graphics card support? Does it go to 11?

    - by Daniel Moth
    Recently I ran into a situation that I have run into quite a few times. Someone encounters a machine and the question arises: "Is there a DirectX 11 card in this machine?". Typically the reason you are interested in that is because cards with DirectX 11 drivers fully support DirectCompute (and by extension C++ AMP) for GPGPU programming. The driver specifically is WDDM (1.1 on Windows 7, and Windows 8 introduces WDDM 1.2 with cool new capabilities). There are many ways of figuring out if you have a DirectX 11 card, so here are the approaches that you can use, with a bonus right at the end of the post.
    Run DxDiag. WindowsKey + R, type DxDiag and hit Enter. That is the DirectX diagnostic tool, which unfortunately only tells you, on the "System" tab, what the highest version of DirectX installed on your machine is. So if it reports DirectX 11, that doesn't mean you have a DX11 driver! The "Display" tab has a promising "DDI version" label, but unfortunately that doesn't seem to be accurate on the machines I've tested it with (or I may be misinterpreting its use). Either way, this tool is not the one you want for this purpose, although it is good for telling you the WDDM version among other things.
    Use the Microsoft hardware page. There is a Microsoft Windows 7 compatibility center that lists all hardware (tip: use the advanced search) and you could try and locate your device there… good luck.
    Use Wikipedia or the hardware vendor's website. Use the Wikipedia page for the vendor cards, for both nvidia and amd. Often this information will also be in the specifications for the cards on the IHV site, but it is nice that Wikipedia has a single page per vendor that you can search etc. There is a column in the tables for API support where you can see the DirectX version.
    Check if it is one of these recommended DX11 cards. You may not have a DirectX 11 card and are interested in purchasing one. While I am in no position to make recommendations, I will list here some cards from two big IHVs that we know are DirectX 11 capable.
    Some AMD (aka ATI) cards. Low end, inexpensive DX11 hardware: Radeon 5450, 5550, 6450, 6570. Mid range (decent perf, single precision): Radeon 5750, 5770, 6770, 6790. High end (capable of double precision): Radeon 5850, 5870, 6950, 6970. Single precision APUs: AMD E-Series APUs, AMD A-Series APUs.
    Some NVIDIA cards. Low end, inexpensive DX11 hardware: GeForce GT430, GT 440, GT520, GTS 450; Quadro 400, 600. Mid-range (decent perf, single precision): GeForce GTX 460, GTX 550 Ti, GTX 560, GTX 560 Ti; Quadro 2000. High end (capable of double precision): GeForce GTX 480, GTX 570, GTX 580, GTX 590, GTX 595; Quadro 4000, 5000, 6000; Tesla C2050, C2070, C2075.
    Get the DirectX SDK and run DirectX Caps Viewer. Download and install the June 2010 DirectX SDK. As part of that you now have the DirectX Capabilities Viewer utility (find it in your start menu by searching for "DirectX Caps Viewer"; the filename is DXCapsViewer.exe). It will list all your devices (emulated, and real hardware ones) under the first node. Expand the hardware entries and then expand again the Direct3D 11 folder. If you see D3D_FEATURE_LEVEL_11_ under that, then your card supports feature level 11, which means it supports DirectCompute and C++ AMP. In the following screenshot of one of my old laptops, the card only goes to feature level 10.
    Run a utility from the web that just tells you! Of course, writing some C++ AMP code that enumerates accelerators and lists the ones that are capable is trivial. 
However that requires that you have redistributed the runtime, so a more broadly applicable approach is to use the DX APIs directly to enumerate the DX11 capable cards. That is exactly what the development lead for C++ AMP has done and he describes and shares that utility at this post. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support we have had a common question come up: 'How do I test without the hardware crypto acceleration?'. Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing would run on all supported types of hardware anyway). I've also seen it asked in a customer context too, so that we can show that there is a performance gain from the hardware crypto acceleration (not just from the fact that the SPARC T4 is a much faster processor than the T3) and measure what it is for their application.
    With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI because in both of those classes of processor the encryption doesn't require a device driver; instead it is implemented as unprivileged, userland-callable instructions. It turns out there is a way to do this by using features of the Solaris runtime loader (ld.so.1).
    First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternative to this is having the application coded to call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and, unfortunately misnamed for historical reasons, libsoftcrypto.so).
    The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 not to select the HWCAP section matching certain features, even if isainfo says they are present. For SPARC T4 that would be:
      export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"
    and for Intel systems with AES-NI support:
      export LD_HWCAP="-aes"
    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. It also works for the Oracle DB and Java JCE. However, it does not work for the default-enabled OpenSSL "t4" or "aes-ni" engines (unfortunately) because they make explicit calls to getisax() themselves rather than using multiple ELF cap sections. However, we can still use OpenSSL to demonstrate this by explicitly selecting the "pkcs11" engine, using only a single process and thread.
      $ openssl speed -engine pkcs11 -evp aes-128-cbc
      ...
      type          16 bytes     64 bytes      256 bytes    1024 bytes   8192 bytes
      aes-128-cbc   54170.81k    187416.00k    489725.70k   805445.63k   1018880.00k
      $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
      ...
      type          16 bytes     64 bytes      256 bytes    1024 bytes   8192 bytes
      aes-128-cbc   29376.37k    58328.13k     79031.55k    86738.26k    89191.77k
    We can clearly see the difference this makes in the case where AES offload to the SPARC T4 was disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1 using only a single process/thread - using -multi you will get even bigger numbers).
      $ openssl speed -evp aes-128-cbc
      ...
      type          16 bytes     64 bytes      256 bytes    1024 bytes   8192 bytes
      aes-128-cbc   85526.61k    89298.84k     91970.30k    92662.78k    92842.67k
    Yet another cool feature of the Solaris linker/loader - thanks Rod and Ali. Note that the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.

    Read the article

  • Webcast Q&A: Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the third webcast in our WebCenter in Action webcast series, "Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter", where customer Sean Mattson from HDS and Rob Vandenberg from Oracle Partner Lingotek shared how Oracle WebCenter is powering Hitachi Data Systems' externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.
    Sean Mattson, Hitachi Data Systems
    Q: Did you run into any issues in the deployment of the platform?
    A: There were some challenges; we were one of the first enterprise 'on premise' installations for Lingotek, and our WebCenter platform also has a lot of custom features. There were a lot of iterations and back and forth working with Lingotek at first. We both helped each other, learned a lot, and in the end managed to resolve all issues and roll out a very compelling solution for HDS.
    Q: What has been the biggest benefit your end users have seen?
    A: Being able to manage and govern the content lifecycle globally and centrally, and at the same time enabling the field to update, review and publish the incremental content changes without a lot of touchpoints, has helped us streamline and simplify the entire publishing process.
    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: I wouldn't say resistance as much as skepticism that we could actually deploy an automated and self-publishing solution. Even if a solution is great, adoption of a new process can be a challenge, and we are still pursuing our adoption targets. One of the most important aspects is to include lots of training and support materials and offer as much helpdesk-type support as needed to get the field self-sufficient and confident in the capabilities of the system.
    Rob Vandenberg, Lingotek
    Q: Are there any limitations regarding supported languages, such as support for French Canadian and Indian languages?
    A: Lingotek supports all language pairs, including right-to-left languages and double-byte languages such as Chinese, Japanese and Korean.
    Q: Is the Lingotek solution integrated with the new 11g release of WebCenter Sites?
    A: Yes! In fact, Lingotek is the first OVI partner for Oracle WebCenter Sites.
    Q: Can translation memories help to improve the accuracy of machine translation?
    A: One of the greatest long-term strategic benefits of using Lingotek is the accumulation of translation memories, or past human translations. These TMs can be used to "train" statistical machine translation engines to have higher and higher quality. This virtuous cycle is ongoing and will consistently improve both machine and human translations.
    Q: We have existing translation memories from previous work with our translation service provider. Can they be easily imported into the Lingotek solution for re-use?
    A: Yes, Lingotek is standards compliant. We support TM import in both the TMX and XLIFF formats.
    Q: If we use Lingotek as a service to do our professional translation and also use the Lingotek software solution, do we get the translation memories to give us a means of just translating future adds and changes ourselves?
    A: Yes, all the data is yours, always. Lingotek can provide both the integrated translation software as well as the professional translation services. All the content and translation memories are yours.
    Q: Can you give us an example of where community translation has proved to be successful?
    A: The key word here is community.
If you have a community that cares about you, your content, and the rest of the community, then community translation can work for you. We've seen effective use cases in Product User Groups content, Support Communities, and other types of User Generated content, like wikis and blogs.   If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!   Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter from Oracle WebCenter

    Read the article

  • SSMS Tools Pack 2.7 is released. New website, improved licensing and features.

    - by Mladen Prajdic
    New website: Nice, isn't it? Cleaner, simpler, better looking and more modern. If you have any suggestions for further improvements I'd be glad to hear them.
    Simpler licensing: With SSMS Tools Pack 2.7 the licensing is finally where it should be. It is now based on the activate/deactivate model. This way you can move a license from machine to machine with a simple deactivation on one and reactivation on another machine. Much better, no? Because of very good feedback I have added an option for 6 machines and lowered the 4 machines option to 3 machines. This should make it much simpler for you to choose the right option for yourself.
    Improved features: Version 2.5.3 was already extremely stable and 2.7 continues that tradition. Because of that I could fully focus on features, and that's why 3.0 will rock even more than 2.7! ;) In version 2.7 I have addressed quite a few improvements you have been requesting for a while now.
    SQL History: This is probably the biggest time saver out there, therefore it's only fair it gets a few important updates. If you have an existing .sql file opened, the Window Content History now saves your code to that existing file and also makes a backup in the SQL History log default location. Search is still done through the SQL History log, but the Tab Sessions Restore opens your existing .sql file. This way you don't have to remember to save your existing files by yourself anymore. A bug where you couldn't search properly if you copied the log files to a new location was fixed. Unfortunately this removed the option to filter a search with the time component; the smallest search interval is now one day. The SSMS Tools Pack now remembers the visibility of the Current Window History window when you exit SSMS.
    SQL Snippets: You can now set the position of the cursor in your snippets by placing {C} somewhere in your snippet. It's a small improvement but can be a huge time saver since you don't have to move through the snippet to the desired location anymore.
    Run script on multiple databases: Database choices can now be saved with a name and then loaded again next time. You can also choose to run the script in a new window for each chosen database.
    Search through grid results: You can now go to the previous/next search result with the Prev/Next control inside the search window. This is extremely useful if you have a large resultset. It saves you the scrolling.
    CRUD generator: Four new variables have been added: |CurrentDate| writes the current date in format yyyy-MM-dd to your script, |CurrentTime| writes the current time in 24h format HH:mm:ss to your script, |CurrentWinUser| writes the current Windows logged-on user to your script, and |CurrentSqlUser| writes the current SQL logged-on login to your script. This was actually quite a requested feature, so if you have any other ideas for extra variables, do let me know.
    That's about it. I hope you're going to enjoy this version as much as the previous ones. Have fun!

    Read the article

  • Oracle Database 11g R2: One Year Since Release

    - by Yusuke.Yamamoto
    November 17, 2010 marked one year since the release of Oracle Database 11g Release 2 (R2). This post looks back at the first year. (Posted 2010/11/17; updated 2011/01/07.)
    Oracle Database 11g R2 timeline:
      2009-11-11: Oracle Exadata Database Machine Version 2
      2009-11-17: Oracle Database 11g R2 release
      2010-03-31: SAP and Oracle Database 11g R2 (ISV-related item)
      2010-05-18: Windows Server 2008 R2 / Windows 7 and Oracle Database 10g R2
      2010-06-23: Oracle Application Express 4.0
      2010-07-09: IDC Japan: 2009 Windows RDBMS market figures
      2010-08-17: TPC-C Benchmark Price/Performance result
      2010-09-13: Patch Set 11.2.0.2 for Linux
      2010-10-20: Oracle Exadata Database Machine X2
      2010-11-17: Oracle Database 11g R2 one-year anniversary
    The post also linked to Oracle Database 11g R2 feature summaries (including Oracle Exadata Database Machine), customer voice articles covering an upgrade from Oracle9i Database and the use of Oracle ASM, and related coverage at IT Leaders and Think IT.

    Read the article

  • Error: "The website declined to show this webpage" with AjaxControlToolkit 3.5

    - by Vijay
    What I have: I have an ASP.NET page deployed in the layouts folder of the 12 hive in SharePoint. This page makes use of the Accordion control in AjaxControlToolkit.dll v3.5.40412.2. I have placed the page's code-behind class assembly and AjaxControlToolkit.dll in the virtual directory's bin folder.
    What I want: I want to load this page when a link is clicked from a web part, for users of the "Visitors" site group, with the DLLs placed in the virtual directory bin folder.
    What problem am I facing: The page loads properly for the administrator. But for "Visitors", it shows the "The website declined to show this webpage" error message. The page works fine for "Visitors" in these scenarios: if I place both assemblies in the GAC, or if I give Everyone read permission to AjaxControlToolkit.dll (in bin). Am I missing something here?

    Read the article

  • Android: Haptic feedback: onClick() event vs hapticFeedbackEnabled in the view

    - by dreeves
    If you want a button to provide haptic feedback (ie, the phone vibrates very briefly so you can feel that you really pushed the button), what's the standard way to do that? It seems you can either explicitly set an onClick() event and call the vibrate() function, giving a number of milliseconds to vibrate, or you can set hapticFeedbackEnabled in the view. The documentation seems to indicate that the latter only works for long-presses or virtual on-screen keys: http://developer.android.com/reference/android/view/View.html#performHapticFeedback(int) If that's right, then I need to either make my button a virtual on-screen key or manually set the onClick() event. What do you recommend? Also, if I want the vibrating to happen immediately when the user's finger touches the button, as opposed to when their finger "releases" the button, what's the best way to accomplish that? Related question: http://stackoverflow.com/questions/2228151/how-to-enable-haptic-feedback-on-button-view

    Read the article

  • VS 2010 VSTO Add in for EXCEL 2007 Won't load

    - by Erick
    Hi everyone, We have an application that is built with Excel as the front end using the Office object model. We were using a C++ shim to load it as a COM add in for Excel 2003, but I've updated it to use the latest VSTO for Excel 2007. I've also been using VS 2010 for the latest version. The problem is that everything works great on my dev machine in debugger mode as well as just launching Excel 2007, but I cannot get it to run on any other machine (my current target machine is Win7, development is XP). I've created a ClickOnce deployment of the Addin, and I can see it in the list of COM Addins, but when I check on it to load it nothing happens. I re-open the Addins manager and it is un-checked. I've also tried setting in in the registry, but as soon as I run it, it sets the registry back to do not load. I've tried everything I can think of and searched all over the web but no dice. Any help would be appreciated!

    Read the article

  • IIS6 - 301 permanent redirect response accessing an asp.net web site

    - by omatrot
    I've installed an ASP.NET 2.0 virtual directory application on the default web site in IIS6 (Windows 2003 computer) with a Visual Studio generated web deployment setup. Unfortunately, following a successful installation, I'm unable to access our web site. I get a permanent redirect from IIS, as shown in the log below:
      2010-02-25 15:32:41 W3SVC1 127.0.0.1 GET /naweb3 - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+Trident/4.0;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.30;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729) 301 0 0
    This is the first time I have had this kind of problem. Before I go with completely resetting IIS6 on this machine, I'm wondering what could cause this problem. The virtual directory configuration seems fine to me, with no redirection at all at this level. Any help appreciated. TIA.

    Read the article

  • Unable to validate data. at System.Web.Configuration.MachineKeySection.GetDecodedData

    - by Ben Williams
    I have several websites which get approximately 3000 pageviews in total per day, and I get this viewstate error roughly 5-10 times per day, caught in global.asax:
      System.Web.HttpException: Unable to validate data. at System.Web.Configuration.MachineKeySection.GetDecodedData(Byte[] buf, Byte[] modifier, Int32 start, Int32 length, Int32& dataLength) at System.Web.UI.ObjectStateFormatter.Deserialize(String inputString)
    I have tried: hard-coding the machine key in web.config for all websites; hard-coding the machine key in machine.config; adding items to the pages section of the web.config for all websites. The machine key looks like:
      <machineKey validationKey="key goes here" decryptionKey="key goes here" validation="SHA1" decryption="AES" />
    The pages section looks like:
      <pages renderAllHiddenFieldsAtTopOfForm="true" validateRequest="false" enableEventValidation="false" viewStateEncryptionMode="Never">
    The errors are not related to application pool recycling as best I can tell, as the pool is set to recycle at every 100,000 requests. I am not running a web farm or web garden. Quite often I get two or three of these errors in a row, as if a user is getting an error, going back, and then clicking the link again. Anyone have any ideas?

    Read the article

  • Deserializing JSON into an object with Json.NET

    - by hmemcpy
    Hello. I'm playing a little bit with the new StackOverflow API. Unfortunately, my JSON is a bit weak, so I need some help. I'm trying to deserialize this JSON of a User:
      {"user":{
        "user_id": 1,
        "user_type": "moderator",
        "creation_date": 1217514151,
        "display_name": "Jeff Atwood",
        ...
        "accept_rate": 100
      }}
    into an object which I've decorated with JsonProperty attributes:
      [JsonObject(MemberSerialization.OptIn)]
      public class User
      {
          [JsonProperty("user_id", Required = Required.Always)]
          public virtual long UserId { get; set; }

          [JsonProperty("display_name", Required = Required.Always)]
          public virtual string Name { get; set; }
          ...
      }
    I get the following exception:
      Newtonsoft.Json.JsonSerializationException: Required property 'user_id' not found in JSON.
    Is this because the JSON object is an array? If so, how can I deserialize it to the one User object? Thanks in advance!
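    For reference, the JSON above nests the user fields under a top-level "user" property, so one way to make the mapping explicit is to deserialize into a small wrapper type. A minimal sketch follows; the UserWrapper name and the json variable are illustrative, not from the original question:
      using Newtonsoft.Json;

      public class UserWrapper
      {
          // Maps the outer {"user": {...}} object onto the existing User class
          [JsonProperty("user")]
          public User User { get; set; }
      }

      // Usage sketch:
      // User user = JsonConvert.DeserializeObject<UserWrapper>(json).User;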

    Read the article

  • Printer Redirection from 2003 Terminal Server to 2008 Terminal Server

    - by xmaveric
    Our environment is a terminal server cluster (Win2003 servers) that everyone connects to in order to do their work. I have set up a new Win2008 R2 machine with the intention of using it to publish our main application to the TS farm. The idea was to keep this server dedicated to one application to avoid driver/dll conflicts with other software. I created a RemoteApp on the new server, made an .rdp file, and placed it on the desktop of our TS farm servers. The problem I am running into is that when I connect to the RemoteApp, it doesn't show the printers that are installed on the TS server I am connecting from. We have over 20 printers installed on our TS servers, each with different drivers and permissions. I really do not want to reinstall all of these on the RemoteApp server, so I was hoping Printer Redirection would handle this. It would appear that because the RDP client for Server 2003 x64 is 6.0, that version doesn't support the Easy Print feature (which requires 6.1). I can't find any newer version on the MS site to download for Win2003 x64. How can I get the printers on the TS farm machine to redirect so they are viewed by the RemoteApp machine?

    Read the article

  • Authenticating Windows 7 against MIT Kerberos 5

    - by tommed
    Hi There, I've been wracking my brains trying to get Windows 7 authenticating against a MIT Kerberos 5 Realm (which is running on an Arch Linux server). I've done the following on the server (aka dc1): Installed and configured a NTP time server Installed and configured DHCP and DNS (setup for the domain tnet.loc) Installed Kerberos from source Setup the database Configured the keytab Setup the ACL file with: *@TNET.LOC * Added a policy for my user and my machine: addpol users addpol admin addpol hosts ank -policy users [email protected] ank -policy admin tom/[email protected] ank -policy hosts host/wdesk3.tnet.loc -pw MYPASSWORDHERE I then did the following to the windows 7 client (aka wdesk3): Made sure the ip address was supplied by my DHCP server and dc1.tnet.loc pings ok Set the internet time server to my linux server (aka dc1.tnet.loc) Used ksetup to configure the realm: ksetup /SetRealm TNET.LOC ksetup /AddKdc dc1.tnet.loc ksetip /SetComputerPassword MYPASSWORDHERE ksetip /MapUser * * After some googl-ing I found that DES encryption was disabled by Windows 7 by default and I turned the policy on to support DES encryption over Kerberos Then I rebooted the windows client However after doing all that I still cannot login from my Windows client. :( Looking at the logs on the server; the request looks fine and everything works great, I think the issue is that the response from the KDC is not recognized by the Windows Client and a generic login error appears: "Login Failure: User name or password is invalid". The log file for the server looks like this (I tail'ed this so I know it's happening when the Windows machine attempts the login): If I supply an invalid realm in the login window I get a completely different error message, so I don't think it's a connection problem from the client to the server? But I can't find any error logs on the Windows machine? (anyone know where these are?) If I try: runas /netonly /user:[email protected] cmd.exe everything works (although I don't get anything appear in the server logs, so I'm wondering if it's not touching the server for this??), but if I run: runas /user:[email protected] cmd.exe I get the same authentication error. Any Kerberos Gurus out there who can give me some ideas as to what to try next? pretty please?

    Read the article

  • ODBC Connection String Problem

    - by Brett
    Hi there, I am having major trouble connecting to my database via ODBC. The db is local (but I have a mirror on a virtual machine), so I am trying to use the connectionstring: Dsn=MonetDB;host=TARBELL where TARBELL is the name of my computer. However, it doesn't connect. BUT, this string does: Dsn=MonetDB;host=localhost as does Dsn=MonetDB Can anyone explain this? I am at a complete loss. I have taken down my firewalls (at least until I get this figured out), so that can't be the problem. I eventually want to change the TARBELL to the mirrored virtual machine running another instance of the database. Many thanks, Brett
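    If it helps to rule out the driver or DSN itself, a quick way to exercise each candidate connection string from code is a small ODBC smoke test. This is only a diagnostic sketch under the assumption that the MonetDB ODBC driver sits behind the DSN; the class name is illustrative:
      using System;
      using System.Data.Odbc;

      class OdbcSmokeTest
      {
          static void Main()
          {
              // Try each connection string variant to see which ones actually open
              string[] candidates =
              {
                  "Dsn=MonetDB",
                  "Dsn=MonetDB;host=localhost",
                  "Dsn=MonetDB;host=TARBELL"
              };

              foreach (string cs in candidates)
              {
                  try
                  {
                      using (var conn = new OdbcConnection(cs))
                      {
                          conn.Open();
                          Console.WriteLine("OK:     " + cs);
                      }
                  }
                  catch (Exception ex)
                  {
                      Console.WriteLine("FAILED: " + cs + " -> " + ex.Message);
                  }
              }
          }
      }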

    Read the article

  • Algorithms for City Simulation?

    - by anon
    I want to create a city filled with virtual creatures, say like Sim City, where each creature walks around, doing its own tasks. I'd prefer the city to not 'explode' or do weird things -- like the population dying off, or the population leaving, or any other unexpected crap. Is there a set of basic rules I can encode each agent with so that the city will be 'stable'? (Much like how for physics simulations we have some basic rules that govern everything; is there a set of rules that governs how a simulation of a virtual city will be stable?) I'm new to this area and have no idea what algorithms/books to look into. Insights deeply appreciated. Thanks!

    Read the article

  • ssh tunneling with visualsvn

    - by DeveloperChris
    I have been asked to set up VisualSVN for Visual Studio 2008. Due to firewall restrictions and server configuration, I need to use SSH tunneling. My problem is this: the local machine needs to connect to a gateway machine via SSH, and then connect to the Subversion server, so
      Local machine ---{ssh}--- gateway ---{ssh}--- subversion server
    I am not exactly sure of the correct process to do this. It appears that I must start an SSH process using plink to open a local port and forward that to the remote Subversion server, e.g.:
      plink user@gateway -L 22:192.168.1.1:22
    Then when VisualSVN starts, it uses tortoiseplink to make the actual connection through to the Subversion server using svn+ssh://username@localhost:22/myrepo
    This seems very, very clunky. Firstly, it needs several steps to set up the connection. Secondly, I need plink running, which leaves a command prompt on the desktop (clutter = yuck). Lastly, I need to use two different programs that do the same thing (plink + tortoiseplink). The problem is that tortoiseplink doesn't run in the background: as soon as I connect to the SSH gateway and enter the password, it closes again, so I can't use it to create the initial connection. If I use plink instead of tortoiseplink in VisualSVN then I never get prompted for the password, so it just hangs with an open command prompt and no password request. Is there a way to set up VisualSVN so that everything happens in one command line? I have searched high and low for a suitable and clean method to tunnel from VisualSVN to the remote server and have found very little; it all either assumes one hop (not two like mine) or glosses over all the hard bits. DC

    Read the article

  • Single Sign On with Forms Authentication

    - by Christo Fur
    I am trying to set up single sign-on for 2 websites that reside on the same domain, e.g.
      http://mydomain (top-level site that contains a forms-auth login page)
      http://mydomain/admin (separately developed website residing in a virtual application within the parent website)
    I have read a few articles on single sign-on, e.g. http://www.codeproject.com/KB/aspnet/SingleSignon.aspx, and they seem to suggest it is just a case of having the same machineKey section in each web.config so that the cookie encryption and decryption are the same for each application. I have set this up, and I never get prompted for credentials in the sub-website (the virtual application); I always get prompted in the parent site. In addition to having the same machineKey, I've also tried adding the same <authentication> and <authorization> elements. Any idea what I could be missing?

    Read the article

  • MVC2 DataAnnotations validation with inheritance

    - by bhiku
    Hi, I have a .NET 2.0 class whose properties are marked virtual. I need to use the class as a model in an MVC2 application. So, I have created a .NET 3.5 class inheriting from the .NET 2.0 class and added the DataAnnotations attributes to the overridden properties in the new class. A snippet of what I have done is below:
      // .NET 2.0 class
      public class Customer
      {
          private string _firstName = "";
          public virtual string FirstName
          {
              get { return _firstName; }
              set { _firstName = value; }
          }
      }

      // .NET 3.5 class
      public class MVCCustomer : Customer
      {
          [Required(ErrorMessage="Firstname is required")]
          public override string FirstName
          {
              get { return base.FirstName; }
              set { base.FirstName = value; }
          }
      }
    I have used the class as the model for an MVC2 view using the HtmlFor helpers. Server-side validation works correctly but the client-side validation does not; specifically, the validation error is not displayed on the page. What am I missing, or is it only possible to do this using buddy classes? Thanks.
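    For reference, the "buddy class" approach mentioned at the end usually looks something like the sketch below, where the validation attributes live on a separate metadata class rather than on the overridden property. This is only a sketch; the CustomerMetadata name is illustrative, not from the original question:
      using System.ComponentModel.DataAnnotations;

      // The metadata ("buddy") class carries the validation attributes
      [MetadataType(typeof(CustomerMetadata))]
      public class MVCCustomer : Customer
      {
      }

      public class CustomerMetadata
      {
          // The property name must match the property on Customer
          [Required(ErrorMessage = "Firstname is required")]
          public string FirstName { get; set; }
      }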

    Read the article

  • Segmentation fault on MPI, runs properly on OpenMP

    - by Bellman
    Hi, I am trying to run a program on a computer cluster. The structure of the program is the following:
      PROGRAM something
        ...
        CALL subroutine1(...)
        ...
      END PROGRAM

      SUBROUTINE subroutine1(...)
        ...
        DO i=1,n
          CALL subroutine2(...)
        ENDDO
        ...
      END SUBROUTINE

      SUBROUTINE subroutine2(...)
        ...
        CALL subroutine3(...)
        CALL subroutine4(...)
        ...
      END SUBROUTINE
    The idea is to parallelize the loop that calls subroutine2. The main program basically only makes the call to subroutine1, and only its arguments are declared. I use two alternatives. On the one hand, I write OpenMP clauses around the loop. On the other hand, I add an IF conditional branch around the call and I use MPI to share the results. In the OpenMP case, I add CALL KMP_SET_STACKSIZE(402653184) at the beginning of the main program and I can run it with 8 threads on an 8-core machine. When I run it (on the same 8-core machine) with MPI (either using 8 or 1 processors) it crashes just when it makes the call to subroutine3, with a segmentation fault (signal 11) error. If I comment out subroutine4, then it doesn't crash (notice that it crashed just when calling subroutine3, and it works when commenting out subroutine4). I compile with mpif90 using MPICH2 libraries and the following flags: -O3 -fpscomp logicals -openmp -threads -m64 -xS. The machine has EM64T architecture and I use a Debian Linux distribution. I set ulimit -s hard before running the program. Any ideas on what is going on? Has it something to do with stack size? Thanks in advance

    Read the article

  • Why are my installed fonts not available in .NET?

    - by Dan Herbert
    I'm trying to render some images with text using a font I just added to my machine and no matter what I do, I can't seem to get the font to become accessible in .NET. I tried using PrivateFontCollection.AddFontFile(filename) and PrivateFontCollection.AddMemoryFont(...) to load the font into memory. Whenever I do this, the method throws a "File Not Found" exception, which is unusual because I get this exception when loading the font from memory, where there should be no files to be "not found". Initially, I thought it may be because the font I was trying to use was in the .pfm format, so I converted the font to .otf and had the same problem. Then I tried installing the .otf font to my Windows Fonts folder so I could pull it from FontFamily.Families. Once I installed the font, it became available in Microsoft Word & Notepad2. However, when I try to load it from FontFamily.Families, it is not included in the array. I thought rebooting my machine would fix the issue but obviously there is something more complicated involved here. Is there something basic I just might have missed when installing the font in my machine (Windows Vista), or is there another way to programmatically load a font that I should be using instead? Is .otf not supported in .NET?
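    For what it's worth, the basic PrivateFontCollection pattern being attempted looks roughly like the sketch below; the font path and family index are assumed for illustration and are not from the original question:
      using System;
      using System.Drawing;
      using System.Drawing.Text;

      class PrivateFontDemo
      {
          static void Main()
          {
              // AddFontFile expects a full path to a font file the process can read
              var fonts = new PrivateFontCollection();
              fonts.AddFontFile(@"C:\Fonts\MyFont.otf");   // assumed path

              // The loaded family can then be used without installing the font system-wide
              using (var font = new Font(fonts.Families[0], 24f, FontStyle.Regular, GraphicsUnit.Pixel))
              {
                  Console.WriteLine("Loaded family: " + font.FontFamily.Name);
              }
          }
      }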

    Read the article

  • glassfish v3.0 hangs no app is ever deployed and no error is ever shown

    - by Samuel Lopez
    I have a web app that uses JSF 2.0 with RichFaces and PrimeFaces, Hibernate and Java, and I use NetBeans 7.1.2 as the IDE. When I run the app, the GlassFish server is started and the log shows this:
      Launching GlassFish on Felix platform
      Información: Running GlassFish Version: GlassFish Server Open Source Edition 3.1.2 (build 23)
      Información: Grizzly Framework 1.9.46 started in: 20ms - bound to [0.0.0.0:4848]
      Información: Grizzly Framework 1.9.46 started in: 32ms - bound to [0.0.0.0:8181]
      Información: Grizzly Framework 1.9.46 started in: 59ms - bound to [0.0.0.0:8080]
      Información: Grizzly Framework 1.9.46 started in: 32ms - bound to [0.0.0.0:3700]
      Información: Grizzly Framework 1.9.46 started in: 21ms - bound to [0.0.0.0:7676]
      Información: Registered org.glassfish.ha.store.adapter.cache.ShoalBackingStoreProxy for persistence-type = replicated in BackingStoreFactoryRegistry
      Información: SEC1002: Security Manager is OFF.
      Información: SEC1010: Entering Security Startup Service
      Información: SEC1143: Loading policy provider com.sun.enterprise.security.provider.PolicyWrapper.
      Información: SEC1115: Realm [admin-realm] of classtype [com.sun.enterprise.security.auth.realm.file.FileRealm] successfully created.
      Información: SEC1115: Realm [file] of classtype [com.sun.enterprise.security.auth.realm.file.FileRealm] successfully created.
      Información: SEC1115: Realm [certificate] of classtype [com.sun.enterprise.security.auth.realm.certificate.CertificateRealm] successfully created.
      Información: SEC1011: Security Service(s) Started Successfully
      Información: WEB0169: Created HTTP listener [http-listener-1] on host/port [0.0.0.0:8080]
      Información: WEB0169: Created HTTP listener [http-listener-2] on host/port [0.0.0.0:8181]
      Información: WEB0169: Created HTTP listener [admin-listener] on host/port [0.0.0.0:4848]
      Información: WEB0171: Created virtual server [server]
      Información: WEB0171: Created virtual server [__asadmin]
      Información: WEB0172: Virtual server [server] loaded default web module []
      Información: Inicializando Mojarra 2.1.6 (SNAPSHOT 20111206) para el contexto '/test'
      Información: Hibernate Validator 4.2.0.Final
      Información: WEB0671: Loading application [test] at [/test]
      Información: CORE10010: Loading application test done in 4,885 ms
      Información: GlassFish Server Open Source Edition 3.1.2 (23) startup time : Felix (1,848ms), startup services(5,600ms), total(7,448ms)
      Información: JMX005: JMXStartupService had Started JMXConnector on JMXService URL service:jmx:rmi://SJ007:8686/jndi/rmi://SJ007:8686/jmxrmi
      Información: WEB0169: Created HTTP listener [http-listener-1] on host/port [0.0.0.0:8080]
      Información: Grizzly Framework 1.9.46 started in: 14ms - bound to [0.0.0.0:8080]
      Información: WEB0169: Created HTTP listener [http-listener-2] on host/port [0.0.0.0:8181]
      Información: Grizzly Framework 1.9.46 started in: 12ms - bound to [0.0.0.0:8181]
    But right there it hangs: the deploy bar keeps running, but no more actions are shown and nothing else is logged either; it just stays there until I stop the deploy. Is there any other error log to debug the GlassFish server? Any thoughts? I have reinstalled GlassFish and NetBeans but it all seems the same. I think this started happening after I had to force-restart my computer with NetBeans still open and the app deployed, but it's hard to know for sure if this was the real catalyst. Any thoughts or help is appreciated, thanks. Is it an app error? If so, why are no errors shown in the log?

    Read the article

  • NHibernate Many-To-One on Joined Sublcass with Filter

    - by Nathan Roe
    I have a class setup that looks something like this:
      public abstract class Parent
      {
          public virtual bool IsDeleted { get; set; }
      }

      public class Child : Parent
      {
      }

      public class Other
      {
          public virtual ICollection<Child> Children { get; set; }
      }
    Child is mapped as a joined-subclass of Parent. Children is mapped as a many-to-one bag. The bag has a filter applied to it named SoftDeleteableFilter. The filter mapping looks like:
      <filter-def name="SoftDeleteableFilter" condition="(IsDeleted = 0 or IsDeleted is null)" />
    The problem is that when Other.Children is loaded, the filter is being applied to the Child table and not the parent table. Is there any way to tell NHibernate to apply the filter to the parent class?
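    For context, a filter defined like the one above only takes effect once it is enabled on the session before the collection is loaded; a minimal sketch of that step is below (the helper method and session variable are illustrative, not from the original question):
      using NHibernate;

      static void EnableSoftDeleteFilter(ISession session)
      {
          // Enables the named filter for this session; subsequent queries and
          // collection loads will include the filter's condition
          session.EnableFilter("SoftDeleteableFilter");
      }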

    Read the article
