Search Results

Search found 14745 results on 590 pages for 'setting'.

  • BizTalk Schema Validation

    - by Christopher House
    Perhaps this one should be filed under: Obvious. Yesterday I created a new schema that is going to be used for a WCF receive. The schema has a bunch of restrictions in it, with the intention that we'd validate incoming messages against the schema. I'd never done message validation with BizTalk, but I knew the XmlDisassembler component had an option for validating, so I figured it would be a piece of cake. Sadly, that was not to be the case.

    I deployed my artifacts and configured my receive location's XmlDisassembler with what I thought to be the correct document spec name. I entered My.Project.Name.SchemaTypeName for the document spec and started running unit tests. All of them failed with the following error logged in the event log: "WcfReceivePort_BizTalkWcfService/PurchaseOrderService" URI: "/BizTalkWcfService/PurchaseOrderService.svc" Reason: No Disassemble stage components can recognize the data.

    I went to the receive port and turned on tracking, submitted another message, then went to the admin console and saved the message. It looked correct, but just to be sure, I manually validated it against the schema in my project. As expected, it validated correctly.

    After a bit of thinking on this, I realized that I probably needed to fully qualify my document spec name, meaning, include the assembly name as well as the type name. So, I went back to the receive location and changed the document spec to:

        My.Project.Name.SchemaTypeName, My.Project.Name, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx

    I re-ran my unit tests and everything was working as expected. So, note to self: remember to include the assembly name when setting the document spec. If you need an easy way to determine your schema name and assembly name, find your schema in the admin console and go to its properties. On the property screen, look at the Name and Assembly properties. Your document spec will be "SchemaName, AssemblyName".
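
    As an aside, if the schema assembly is deployed to the GAC, you can also grab the full assembly name from the command line; a quick sketch, assuming gacutil from the Windows SDK is on the PATH (My.Project.Name stands in for the real assembly name):

        gacutil /l My.Project.Name

    The listing shows Version, Culture, and PublicKeyToken, which you can paste after the type name to build the document spec string.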

  • Looking for a way to download all the free software I need to set up my PC all at once...

    - by Jim McKeeth
    I am in the middle of reinstalling Windows 7, and I would like to install all the software in as few steps as possible. I saw a web site recently that listed a lot of different freeware and open source applications. There was a checkmark next to each one so you could select the ones you wanted, and then they would all download in a single package to install at one time. The idea was for when you were setting up after a fresh install. It was for Windows, and they were adding 64-bit applications.

  • Seeking a better solution to restrict access in the GRUB2 menu

    - by LiveWireBT
    I just read that in certain situations you should also protect access to your GRUB2 menu by setting a password, and maybe refine access by adding --unrestricted or --users as arguments to menu entries and submenus. I read the corresponding pages in the Ubuntu Community Documentation and the Arch Wiki. So, I created /etc/grub.d/01_security, stored usernames and passwords in there, made the file executable and ran update-grub. This is working as intended: every action in the menu prompts for username and password. But I also want to modify the automatically generated entries to either restrict them to certain users (via --users) or make them available for everyone, but not editable by everyone (via --unrestricted).

    I was able to find the proper lines in 10_linux and edit them accordingly, however I'd love to see an easier solution. Perhaps an option like GRUB_DISABLE_RECOVERY="true" or GRUB_DISABLE_OS_PROBER=true in /etc/default/grub for easy (re)configuration (for linux and os-prober generated entries). Here's a diff from my 13.10 installation (lines truncated at the terminal width):

        $ diff /etc/grub.d/10_linux /etc/grub.d/10_linux_bak
        123c123
        < echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} --unrestriced \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^$
        ---
        > echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_inde$
        125c125
        < echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} --unrestricted \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_$
        ---
        > echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
        323c323
        < echo "submenu --unrestricted '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_$
        ---
        > echo "submenu '$(gettext_printf "Advanced options for %s" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_id' {"

    tl;dr: I'd love to see a simple solution for GRUB2 entries that cannot be modified without a password or are limited to certain users. (Yes, GRUB_DISABLE_RECOVERY="true" is active.)
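
    For the password part itself, a minimal /etc/grub.d/01_security can be sketched as below (the user "admin" is hypothetical, and the hash comes from grub-mkpasswd-pbkdf2):

        #!/bin/sh
        # everything this script prints to stdout is copied into grub.cfg by update-grub
        cat << 'EOF'
        set superusers="admin"
        password_pbkdf2 admin grub.pbkdf2.sha512.10000.<hash>
        EOF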

  • apt-get update stuck on "Waiting for Headers"

    - by crasic
    I'm setting up a Maverick server on a spare PC. The install completes fine and the system boots up into the shell. However, when I try to do an apt-get update, apt hangs on almost every entry with the message "99% [Waiting for headers]"; sometimes a figure of 96 b/s appears on the far right. The actual percentage that it claims also varies. Searching around online gave a potential solution: using the option Acquire::http::Pipeline-Depth="0". This somewhat alleviates the problem, i.e. it stalls on every other entry with the same message as above. If you wait it out (the whole update took about 4 hours), the update still fails, as a good portion of the hits show an "unable to connect" or similar message, despite the fact that I can ping the server from the PC just fine. The problem is also unrelated to the mirror used, since I've tried about a dozen mirrors with no success; I've even tried commenting out everything but the main entry in sources.list and it still refuses to update. The network connection is fine since I can ping and wget (apt won't let me install lynx until I run a successful update) just fine. I've also reinstalled the distro with no luck. The only thing weird about the setup is that the PC is connecting to the internet through my Windows laptop with ICS configured properly, but as I've said before, the network connection is fine.
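
    For reference, the pipelining option can be tried per run or made permanent; a sketch (the file name under apt.conf.d is arbitrary):

        # one-off test
        sudo apt-get -o Acquire::http::Pipeline-Depth=0 update
        # or persist it for every run
        echo 'Acquire::http::Pipeline-Depth "0";' | sudo tee /etc/apt/apt.conf.d/99no-pipelining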

  • How do I configure NTLM authentication in Firefox on Linux?

    - by tolomea
    Our IT department has NTLM deployed on the intranet servers. I've set the network.automatic-ntlm-auth.trusted-uris value in Firefox on some of the Windows machines and that works fine. However, setting it in Firefox on the Linux machines is not working. This doesn't surprise me at all: I've no notion of where Firefox on Linux is supposed to get the authentication details from. So how is this process supposed to work? What bits of config / infrastructure am I missing?
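
    For reference, the pref can be set in about:config or dropped into user.js in the profile directory; a sketch (the profile folder name and intranet host are placeholders):

        echo 'user_pref("network.automatic-ntlm-auth.trusted-uris", "intranet.example.com");' \
            >> ~/.mozilla/firefox/xxxxxxxx.default/user.js

    Note that on Linux, Firefox cannot borrow credentials from a Windows domain logon the way it does on Windows; it falls back to its built-in NTLM implementation and prompts for a username and password the first time.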

  • How can I see logs in a server after a kernel panic hang?

    - by Low Kian Seong
    I am running a production Gentoo Linux machine, and recently there was a situation where the server hung at my co-located premises; when I got there I noticed that it appeared to be hung on a kernel panic. I rebooted the machine with a hard reboot and was disappointed to find that I could not find a shred of evidence anywhere of why the machine hung. Is it true that with a hard reboot the messages themselves get lost, or is there a setting somewhere, say in syslog-ng or maybe in sysctl, to at least preserve the error log so that I can prevent such mishaps from happening in the future? I am running a 2.6.x kernel, by the way. Thanks in advance.
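
    A panic that never reaches the disk won't show up in syslog, so the usual tricks are to auto-reboot after a panic and to push kernel messages to another machine over the network; a sketch (the target IP is a placeholder, and it assumes the netconsole module is available for your 2.6 kernel):

        # reboot 30 seconds after a panic instead of hanging at the console
        echo 'kernel.panic = 30' | sudo tee -a /etc/sysctl.conf
        echo 'kernel.panic_on_oops = 1' | sudo tee -a /etc/sysctl.conf
        # stream printk output to a listener on 192.168.0.10 before the crash takes everything down
        sudo modprobe netconsole netconsole=@/eth0,514@192.168.0.10/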

  • Is it Secure to Grant Apache User Ownership of Directories & Files for WordPress

    - by Oudin
    I'm currently setting up WordPress on an Ubuntu 12 server. Everything runs fine, but there is an issue when it comes to automatically updating and uploading media via WP, as the Apache user "www-data" does not have permission to write to the directories; "user1" has full permissions. All my directories have permissions of 0755 and files 644. My directory setup is as follows: /home/user1/public_html, with all WP files and directories in "public_html". In order to work around the auto updating and uploading of media, I've granted the Apache user ownership of the following directories:

        sudo chown www-data:www-data wp-content -R
        sudo chown www-data:www-data wp-includes -R
        sudo chown www-data:www-data wp-admin -R

    I would like to know how secure this is, and if it is not secure, what would be the best solution that allows me to keep all files and directories owned by user1 and still lets WP automatically update and upload media?
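
    A common middle ground is to keep user1 as the owner and give www-data group write access to wp-content only, since that is where uploads and most updates land; a sketch:

        sudo chown -R user1:www-data /home/user1/public_html/wp-content
        sudo find /home/user1/public_html/wp-content -type d -exec chmod 775 {} \;
        sudo find /home/user1/public_html/wp-content -type f -exec chmod 664 {} \;

    The trade-off to weigh: anything the web server can write, a compromised PHP script can write too, so the less of the tree www-data can modify, the smaller the blast radius.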

  • DB auto failover in C# does not work when the principal server physically goes offline

    - by user62521
    I'm setting up DB auto failover in C# with SQL Server 2008. I have a high-safety mirror with automatic failover using a witness, and my connection string looks like:

        "Server=tcp:DC01; Failover Partner=tcp:DC02; database=dbname; uid=sewebsite;pwd=somerndpwd;Connect Timeout=10;Pooling=True;"

    During testing, when I turn off the SQL Server service on the principal server, the auto failover works like a charm, but if I take the principal server offline (by shutting down the server or killing the network card), auto failover does not work and my website just times out. I found this article where the second-to-last post suggests that this is because named pipes are being used, which do not work when the principal goes offline, but we force TCP in our connection string. What am I missing to get this DB auto failover working?
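
    One way to separate a mirroring problem from a connection string problem is to watch the mirror's state while pulling the plug on the principal; a sketch using sqlcmd against the partner named above:

        sqlcmd -S tcp:DC02 -Q "SELECT mirroring_role_desc, mirroring_state_desc FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL"

    If the mirror never reports the PRINCIPAL role after the principal drops off the network, the witness quorum is not doing its job and no client-side connection string will help.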

  • AXFR problem using Gradwell secondary DNS

    - by Roaders
    Hi all. I use Gradwell.com to provide secondary DNS, but I keep getting e-mails along the lines of the following saying that it's not working:

        You have asked us to provide a secondary DNS service for the following domain(s). Unfortunately, the primary DNS server(s) you specified are not permitting the necessary zone transfers from our servers, or they are not answering "SOA" queries for your domain correctly.

    I have gone through the support procedure and they weren't that helpful. They have suggested the following:

        Our secondline team have suggested setting the AXFR to use another machine. This will ensure that the transfer is not locked down to one machine and should allow any machine to make the request.

    I don't really know what AXFR is, and I only have one production machine, so I can't set the AXFR to use another one! In previous support correspondence we confirmed that I am allowing transfers to the correct IP and that I have the correct ports open on the firewall. I am running Windows Server 2003. What can I do to try and get these zone transfers working? Thanks
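
    AXFR is simply a full zone transfer, and you can reproduce exactly what the secondary does with dig from any Linux box outside your network; a sketch (the domain and primary IP are placeholders):

        dig @203.0.113.10 example.com SOA
        dig @203.0.113.10 example.com AXFR

    If the SOA query answers but the AXFR is refused, the transfer ACL on the Windows Server 2003 DNS zone (the "Zone Transfers" tab in the zone's properties) is the first place to look.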

  • I accidentally hid my Gnome Panel

    - by Dean
    I got a new tablet PC, and in an attempt to hide the Empathy mail icon, accidentally made the entire panel disappear (the one at the top that has Applications/Places/System, battery life, network connections etc.). I don't believe I killed it, because Alt+F2, then gnome-panel, does not bring it back. I've tried Ctrl+Alt+Backspace to reset GNOME; that didn't bring it back. It must just be some setting like 'hide' or something... Any tips to get it back? Thanks.
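
    If this is the classic GNOME 2 panel, its settings live in GConf and can be reset to the defaults; a sketch (note this wipes any panel customisations):

        gconftool-2 --recursive-unset /apps/panel
        pkill gnome-panel

    The session manager normally respawns the panel with the stock layout after it is killed.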

  • IE8 HTTPS Download Issue

    - by Jon Egerton
    I have a problem with a system I develop, related to IE8 downloading over SSL (i.e. on sites using https://...), as described in this MS KB article: http://support.microsoft.com/kb/323308. We use the HttpCacheability.NoCache option, as the data being downloaded is sensitive and is downloaded from a secured site. I don't want that data to be cached on any of the proxies etc. that the response passes through on its way back to the client. The article describing the issue details a fix on the client side: a registry change to a BypassSSLNoCacheCheck setting. I don't want to loosen the system security just for IE8, as the system works fine on anything more up to date. Getting all the clients to apply the hotfix is difficult at best, and impossible at worst. We need to support IE8 in the system, at least for now. So:

    1. Does the detailed hotfix have any implications for the security at the browser end in IE8 - does it mean the file will be cached (in a place other than where the user saves the file)?
    2. Is there some way I can get these files downloadable with a change at the server end that doesn't break the security side of things?
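
    A quick way to see exactly which caching headers the download response carries (and therefore what IE8 trips over) is to inspect them from the command line; a sketch with a hypothetical download URL:

        curl -skI https://yourserver/secure/download.aspx | grep -iE 'cache-control|pragma'

    The KB describes the failure as triggered by no-cache headers on SSL downloads; the server-side workaround usually discussed for it is to avoid Cache-Control: no-cache on the file-download response itself (for example, using private instead), which still keeps shared proxies from storing the file.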

  • A load balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario, using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache benchmark tool to send lots of requests to the servers (one time testing one of the web servers directly, and the other time testing through the load balancer) with the command:

        ab -n 1000000 -c 500 http://IP/index.html

    The test results show better performance for the single server without the load balancer. Can anyone tell me if I'm going wrong on something?
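
    When comparing the two paths, it helps to first confirm which balancer currently holds the keepalived virtual IP, then run the same ab invocation against both targets back to back; a sketch (the IPs are placeholders):

        ip addr show eth0                                  # the active balancer lists the VIP here
        ab -n 100000 -c 500 http://10.0.0.100/index.html   # through the VIP / HAProxy
        ab -n 100000 -c 500 http://10.0.0.11/index.html    # one backend, direct

    With only two backends, the extra hop through the balancer can easily cost more than it gains on a single-page benchmark, so a like-for-like comparison matters.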

  • Web application design with distributed servers

    - by Bonn
    I want to build a web application/server with this structure:

        main-server
        sub-servers:
            transaction-server (create, update, delete)
            view-server (view, search)
            authentication-server
            documents-server
            reporting-server
            library-server
            e-learning-server

    The main-server acts as the host server for the sub-servers. I can add many sub-servers and connect them to the main-server (via a plug-and-play interface, maybe); a sub-server can then begin querying data from the other sub-servers (which have been connected to the main-server). The sub-servers can be anywhere, as long as they are connected to the internet. The main-server can manage all sub-servers which are connected to it (query data, set permissions between sub-servers, etc.). The purpose is simple: the web application will be huge as the company grows, so I want to distribute it into small connected plug-able servers. My question is: does the structure above already have a standardized method? Or are there any different views? What are the technologies needed? I need to do a lot of research before the execution plan begins. Thanks a lot.

  • Separate tables or single table with queries?

    - by Joe
    I'm making an employee information database, and I need to handle separated employees. Should I (a) set up a query with a macro to move separated employees to a separate table, or (b) just add a flag to the single table denoting separation? I understand that best practice is choice (b), and the one reason I can think of for this is that any structural changes I make to the table later will have to be done in both places. But it also seems like setting up a flag forces me to filter on that flag in basically every useful query I'm going to make in the future.

  • Strategy for hosting 700+ domains, each with static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to:

    1. Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...

    2. Quickly associate CNAMEs with new websites. It's easy to manage CNAMEs on the registrar side, but can it be done gh-pages-style (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing, though.

    I'm also a front-end developer specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?
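
    The CNAME-file convention generalizes well to bulk setups; a sketch of stamping out one site folder per domain from a boilerplate (the file and folder names are hypothetical):

        while read domain; do
            cp -r boilerplate "sites/$domain"
            echo "$domain" > "sites/$domain/CNAME"
        done < domains.txt

    Whatever host ends up serving the folders then only needs to map each incoming Host header to the matching folder, which is essentially what gh-pages does with that file.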

  • Sony Bravia (KDL-32EX402) audio connection fails

    - by Rasmus
    Hey. I'm having trouble connecting my computer's audio to the TV (Sony Bravia KDL-32EX402). I'm using a standard AUX cable (some kind of adapter for L/R to go into the headphone plug on the computer). I'm connecting the other end to the back of the TV; it doesn't actually say "AUDIO IN", but it has to be (it is also right below the "HDMI 1 AUDIO IN"). When changing to "PC mode" and setting the audio input to "PC", nothing happens (but the picture gets transmitted fine over VGA). I have checked that it's not the PC's headphone port, nor the AUX cable. What to do, what to do?

  • How come transparency in textures appears as white in 3ds Max?

    - by rFactor
    I have downloaded a free palm tree model that came with textures and a preview image. In the preview image the tree looks fine, but when I deploy the textures in my scene, the leaves look green plus white, where white is the transparency area. Is there something I need to know about transparent textures? Both in the viewport and in the renderer, all transparency appears as white. What could it be?

    Edit: The model I was talking about is implemented with two JPGs. One is the texture and the other is black and white, where white represents transparency; it is applied to the material in the opacity channel/map. The transparency seems to work somewhat, but there are white borders around the leaves. I think it's because the opacity channel does not properly filter out all white colors for some reason. It also seems that changing the blur affects it, but setting it to 0 does not remove it (and makes it jaggy).

  • ATI Radeon HD with Catalyst driver stuck mirroring screens

    - by Mike Axiak
    In 11.10 I replaced my aging Nvidia card with a new Radeon HD 6970 card. The single card has two DVI output ports, which I've connected to two monitors. I installed Catalyst version 11.9 and I cannot get multiple monitors set up the way I want. I tried running sudo amdcccle and setting the mode to single desktop, multiple monitors, and whenever I do that, Unity crashes and I get back to the login screen. Nothing shows up in the Xorg.*.log files for me to post here. There's only one card, so I don't think Xinerama would be any help here. Anyone have any ideas?

    EDIT: Here's my xorg.conf file:

        Section "ServerLayout"
            Identifier "aticonfig Layout"
            Screen 0 "aticonfig-Screen[0]-0" 0 0
        EndSection

        Section "Module"
        EndSection

        Section "Monitor"
            Identifier "aticonfig-Monitor[0]-0"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
        EndSection

        Section "Monitor"
            Identifier "0-DFP3"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1280x1024"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "0-CRT1"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1280x1024"
            Option "TargetRefresh" "75"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Device"
            Identifier "aticonfig-Device[0]-0"
            Driver "fglrx"
            Option "Monitor-DFP3" "0-DFP3"
            Option "Monitor-CRT1" "0-CRT1"
            BusID "PCI:5:0:0"
        EndSection

        Section "Device"
            Identifier "amdcccle-Device[5]-1"
            Driver "fglrx"
            Option "Monitor-DFP3" "0-DFP3"
            BusID "PCI:5:0:0"
            Screen 1
        EndSection

        Section "Screen"
            Identifier "aticonfig-Screen[0]-0"
            Device "aticonfig-Device[0]-0"
            DefaultDepth 24
            SubSection "Display"
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "amdcccle-Screen[5]-1"
            Device "amdcccle-Device[5]-1"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

  • Dell XPS 710 - Boot Device Not Available

    - by WilliamKF
    Recently I was gifted a Dell XPS 710 tower running WinXP SP3. It was running fine until today, when I restarted and received a blue screen with the error "unmountable boot volume". In order to fix this issue I've attempted to boot from the Windows XP CD and run chkdsk. However, when I attempt to boot from the CD drive I get a "Boot Device Not Available" error. Next I made a bootable USB flash drive, but I receive the same error as with the CD drive. The BIOS recognizes the CD and flash drives, and they appear in the boot device list, but neither will boot. I've attempted to resolve this by rearranging the boot order in the BIOS and switching IDE channels/setting jumpers on the drives themselves, to no effect. I cannot move the hard drive to another machine because the XPS is the only one that supports SATA drives.

  • Getting Synergy+ to work with Windows 7 64-bit and Windows XP 32-bit?

    - by john crisp
    My Windows XP box is a Dell Dimension 4550; the Windows 7 box is Home Premium. Synergy+ installs fine on the Windows 7 box, but using Windows XP as the client, it won't work. Do I need to disconnect the VGA connection from my monitor and control everything from the Win7 DVI connection? In setting up (links) on Windows 7 as the server, I have NO way of inputting info. The setup on the Windows XP box is simple and goes OK, but I can't connect to the Windows XP box! It must be the VGA connection on the Windows XP box. In order to use both machines, I have to turn one off and the other on - hence the VGA is needed on Windows XP. Any info/help most appreciated.

  • Set Thunderbird to always reply using plain text [closed]

    - by stefan.at.wpf
    Possible duplicate: How do I tell Thunderbird never to send (or even try to send) HTML emails?

    In the account settings of Thunderbird (version 11.0.1) I have disabled HTML and set it to compose messages as plain text. That works for new messages. However, when I get an HTML email and reply to it, Thunderbird uses HTML. I went to the setting in Tools → Options → Composition → Send Options → Plain Text Domains and tried *.* as the domain name. I also changed the default text format in Settings → Compose → Send Options. Neither of them works; the reply still uses HTML (for my own text). How can I really reply in plain text only, regardless of the incoming format?
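
    The dialog options map onto prefs that can also be set directly (via the Config Editor, or a user.js in the Thunderbird profile folder); a sketch, with pref names that may need checking against your version, and id1 standing for the first mail identity:

        echo 'user_pref("mail.identity.id1.compose_html", false);' >> user.js
        echo 'user_pref("mailnews.reply_in_default_format", true);' >> user.js

    The second pref is the relevant one here: it tells Thunderbird to compose replies in your default format instead of mirroring the format of the message being answered.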

  • Working with documents and SharePoint - Best practices

    - by KunaalKapoor
    Follow these simple guidelines to make collaboration using SharePoint easier:

    1. File Name

    While it is allowed to use spaces in your filename (and maybe it even seems logical to do so), don't use them if your file will end up on (or is born on) SharePoint. When you use the "download a copy" functionality, SharePoint will replace the spaces with an "_". This might (will) result in inconsistency when you upload the "same" file again, since SharePoint will see this as a different file (since the filename is different). I recommend using a filename with a capitalization-style naming guideline. For instance, the document "Overall governance model.docx" would be named "OverallGovernanceModel.docx".

    Use the TITLE field in the Office applications to give your document a title (and subtitle, keywords, etc.). The Title column can be used in a view in a library. You can get to the document properties by clicking Office Button/Prepare/Properties (Office 2007). This is metadata that is stored with the document, and will remain in the document (even if you exchange the document via e-mail or via an external hard drive).

    The filename cannot be longer than 128 characters (and that is IMHO far beyond reasonable). You cannot use any of these characters: " # % & * : < > ? \ / { | } ~

    2. Versioning

    SharePoint has a built-in versioning system. You can work with major (published) versions and minor (draft) versions. For each of these two version types, you can set the number of versions that are kept. Watch out: each version is saved in full, not only the delta between two versions, and this counts towards your Site Collection quota. (Example: you have a Word document with a size of 2 MB. When you keep 5 drafts, this will result in storing (and consuming) 10 MB.)

    So, don't call your document "NewUserAccountProcessDRAFTv1.docx", but "NewUserAccountProcess.docx", and use the versioning settings in your library. You can enable views on your library to display the version number, and you can enable the version number to be displayed in a Word document.

    3. Use Metadata

    Use metadata to assign other properties to documents, so they can be easily identified, sorted or grouped by.

  • Media Drive Permissions

    - by Wade Wofford
    I just switched from a Hackintosh to Linux, and am trying to make sense of it. On my Hackintosh, I partitioned a big drive into 3 parts: one which holds music, one for film/TV, and one for the OS. I installed Ubuntu onto the OS partition, and am now trying to make it so I can write to the media drives. I've searched around and tried several things. I tried gksu nautilus in Terminal, which brought me into root permissions. When I select a folder and try to change permissions, I get "The owner could not be changed... Error setting owner: Read-only file system".

    Ultimately, I have two specific aims:

    - I want to make it so I can write to the film/TV drive from the Ubuntu machine only.
    - I want to make it so I can write to the music drive from the Ubuntu machine, or any other machine on the network (all Macs). That is, I want a single music library (an iTunes file) that will serve all Mac laptops/iPads/iPhones on the network, but which XBMC on the Ubuntu machine can also see and read from. Music will be added to the iTunes library via a single Mac laptop, but all other devices should be able to see the music drive.
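
    The "Read-only file system" error suggests the partitions themselves are mounted read-only rather than a plain permissions problem. A sketch, on the assumption that the media partitions are journaled HFS+ left over from the Hackintosh days (Linux mounts journaled HFS+ read-only by default; the mount point is a placeholder):

        sudo apt-get install hfsprogs
        sudo mount -t hfsplus -o remount,force,rw /media/Music

    If the assumption holds, disabling journaling on the volume from OS X (diskutil disableJournal) is the cleaner long-term fix.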

  • How to check use of the /userva boot option on a Win2K3 server

    - by Tim Sylvester
    I have some 32-bit Win2K3 servers running an application that fails now and then, apparently due to heap fragmentation (process virtual bytes grows, private bytes does not). I do not have access to the source code or build process of this application. I have modified the boot.ini file on one of these servers to include /userva=2560, halfway between the normal mode of operation and the /3GB option. Normally it takes weeks to reach the point of failure, but I'd like to see right away whether this has actually had any effect. As I understand it, this option limits the kernel to the remaining address space (1536 MB instead of 2048 MB), but does not necessarily give an application the extra address space, depending on the flags in the application's PE header. How can I determine whether the OS is allowing a particular application, running in production, to access address space above 2 GB? Additionally, what's the best way to monitor the system to ensure that the kernel is not starved for address space, and more generally, how should I go about finding the optimal value for this setting?
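
    The PE flag in question is IMAGE_FILE_LARGE_ADDRESS_AWARE, and it can be read without the source; a sketch using dumpbin from Visual Studio (the exe name is a placeholder):

        dumpbin /headers YourApp.exe | findstr /i "large"

    A large-address-aware binary reports "Application can handle large (>2GB) addresses". On the kernel side, the classic health check when tuning /userva or /3GB is the Memory\Free System Page Table Entries counter in Performance Monitor; if it collapses, the kernel is being squeezed too hard.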

  • Unable to connect to guest using VMware Player

    - by eLAN
    I'm running a Red Hat server 5.3 as a guest on a Windows XP VMware Player host. The network setting is set to "Host Only", but I have tried all the other settings too. I'm able to ping the guest machine, but I'm unable to connect to it in any other way, including the web server, Tomcat, telnet, and ssh. All of the services above work from within the guest (using localhost). The guest firewall and SELinux are disabled. Any idea on what I should check next? Every idea will be appreciated... Thanks, Ilan
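
    Since ping works but TCP connections don't, one thing to rule out is services binding only to the loopback interface; a sketch to run inside the guest (the port list is illustrative):

        netstat -tlnp | grep -E ':(22|80|8080)'

    A service listed as 127.0.0.1:8080 will answer on localhost but not from the host; it needs to listen on 0.0.0.0 or the guest's host-only address.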
