Search Results

Search found 9051 results on 363 pages for 'apply templates'.

  • Give access to specific services on Windows 7 Professional machines?

    - by Chad Cook
    We have some machines running Windows 7 Professional at our office. The typical user needs to have access to stop and start a service for a local program they run. These machines have a local web server and database installed and we need to restrict access to certain folders and services related to the web server and database for these users. The setup I have tried so far is to add the typical user as a Power User. I have been able to successfully restrict them from accessing certain folders (as far as I can tell) but now they do not have access to the service needed for starting and stopping the local program. My thought was to give them access to the specific service but I have not had any luck yet. In searching the web for solutions the only results I have found relate to Windows Server 2000 and 2003 and involve creating security templates and databases through the Microsoft Management Console. I am hesitant to try an approach like this as these articles are typically older and I worry this method is outdated. Is there a better way to accomplish the end goal of giving the user permission to run the service and restrict their access to certain folders? If any clarification is needed on the setup or what we are trying to achieve, please let me know. Thanks in advance.
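    For context, this is a rough sketch of the per-service permission change I have in mind, using sc.exe (this is an assumption on my part and not something we have verified on Windows 7 yet; the service name and user SID are placeholders, and the extra ACEs should really be copied from the existing descriptor rather than typed in):
    # Sketch only - "MyAppService" and the user SID are placeholders.
    sc.exe sdshow MyAppService
    # Take the SDDL that sdshow prints, then re-apply it with one extra ACE granting the user
    # start (RP), stop (WP), query status (LC) and read control (RC):
    sc.exe sdset MyAppService "D:(A;;RPWPLCRC;;;S-1-5-21-111-222-333-1001)(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)"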

    Read the article

  • Group Policy for DNS Server Addresses

    - by John
    This question deals strictly with the methodology of using Active Directory to publish DNS settings and controlling the DNS server address order to domain attached clients. This is not asking about if this is the right function, role, or best use of group policy; but rather, how to make it work. I would like to be able to publish in group policy DNS addresses and possibly control the order in which the client consumes the addresses. I have tried following the information from this site without success. I set the following setting within group policy, but the client never shows the settings within the TCP/IP properties. Computer Configuration/Administrative Templates/Network/DNS Client/DNS Servers I did list them as a single space separated list. For example, the following: 192.168.0.1 192.168.10.3 192.168.34.2 192.168.2.67 192.168.56.99 192.168.99.23 This would be for Windows 7, Windows Server 2003, and Windows Server 2008 clients. I am not sure what I am doing wrong or how to get this to work. Am I missing a setting? Do I need to set something differently?
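    In case it helps with debugging: a quick way I plan to check whether the policy is even reaching a client is to look for the registry value the DNS Client template is supposed to write (the path below is my assumption and worth double-checking), since policy-assigned DNS servers may not show up in the TCP/IP properties dialog at all:
    # Hypothetical check on a client, after running gpupdate /force:
    Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient' |
        Select-Object NameServer   # should hold the space-separated server list if the GPO applied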

    Read the article

  • How to upgrade your Solaris 11 with the latest available SRUs ?

    - by jim
    1) Follow the instructions (*) in the document "Updating the Software on Your Oracle Solaris 11 System".
    (*) See and apply "How to Configure the Oracle Solaris Support Repository", then run:
    # pkg update --accept
    Reboot if necessary (see "why" in step 3).
    Note: it is possible that your "pkg" package is not up to date:
    # pkg update
    WARNING: pkg(5) appears to be out of date, and should be updated before running update. Please update pkg(5) using 'pfexec pkg install pkg:/package/pkg' and then retry the update.
    -> So just upgrade it:
    # pkg install pkg:/package/pkg
    Reboot if necessary (see "why" in step 3).
    By the way, note that I am using the DEV tree:
    # pkg publisher -P
    PUBLISHER   TYPE     STATUS   URI
    solaris     origin   online   https://pkg.oracle.com/solaris/dev/
    2) Now our system is ready to jump to the latest SRUs, so check what is available:
    # pkg list -af entire
    NAME (PUBLISHER)   VERSION                   IFO
    entire             0.5.11-0.175.1.0.0.24.2   ---   --> latest version available in the repository!
    entire             0.5.11-0.175.1.0.0.24.0   ---
    entire             0.5.11-0.175.1.0.0.23.0   ---
    entire             0.5.11-0.175.1.0.0.22.1   ---
    entire             0.5.11-0.175.1.0.0.21.0   ---
    entire             0.5.11-0.175.1.0.0.20.0   ---
    entire             0.5.11-0.175.1.0.0.19.0   ---
    entire             0.5.11-0.175.1.0.0.18.0   ---
    entire             0.5.11-0.175.1.0.0.17.0   ---
    entire             0.5.11-0.175.1.0.0.16.0   ---
    entire             0.5.11-0.175.1.0.0.15.1   ---
    entire             0.5.11-0.175.1.0.0.14.0   ---
    entire             0.5.11-0.175.1.0.0.13.0   ---
    entire             0.5.11-0.175.1.0.0.12.0   ---
    entire             0.5.11-0.175.1.0.0.11.0   ---
    entire             0.5.11-0.175.1.0.0.10.0   ---
    entire             0.5.11-0.175.0.11.0.4.1   i--   --> this is the level your OS is at!
    3) Apply the latest SRU (don't forget the --accept parameter):
    # pkg update --accept entire@0.5.11-0.175.1.0.0.24.2
    ------------------------------------------------------------
    Package: pkg://solaris/consolidation/osnet/osnet-incorporation@0.5.11,5.11-0.175.1.0.0.24.2:20120919T184141Z
    License: usr/src/pkg/license_files/lic_OTN
    Oracle Technology Network Developer License Agreement
    Oracle Solaris, Oracle Solaris Cluster and Oracle Solaris Express
    EXPORT CONTROLS
    Selecting the "Accept License Agreement" button is a confirmation of your agreement that you comply, now and during the trial term (if applicable), with each of the following statements:
    - You are not a citizen, national, or resident of, and are not under control of, the government of Cuba, Iran, Sudan, North Korea, Syria, or any country to which the United States has prohibited export.
    <....>
    Packages to remove:       10
    Packages to install:      38
    Packages to update:      443
    Mediators to change:       2
    Create boot environment: Yes   --> a reboot is required and a new BE will be created for you!
    Create backup boot environment: No
    DOWNLOAD    PKGS      FILES         XFER (MB)
    Completed   491/491   23113/23113   504.2/504.2
    PHASE           ACTIONS
    Removal Phase   10359/10359
    Install Phase   15530/15530
    Update Phase    15543/15543
    PHASE                        ITEMS
    Package State Update Phase   932/932
    Package Cache Update Phase   452/452
    Image State Update Phase     2/2
    A clone of solaris-1 exists and has been updated and activated. On the next boot the Boot Environment solaris-2 will be mounted on '/'. Reboot when ready to switch to this updated BE.
    4) Reboot your system:
    # reboot
    5) Resources:
    Solaris 11 Express and Solaris 11 Support Repositories Explained [ID 1021281.1]
    Oracle Solaris 11 Release Notes
    Updating the Software on Your Oracle Solaris 11 System

    Read the article

  • Is there a standard method for assigning nameservers to servers in a Windows domain?

    - by HopelessN00b
    As the title asks, I'm wondering if there's any standard or "best practice" for how to actually assign nameservers (DNS) and manage the nameserver configuration for client servers on a Windows domain. I'm talking about the setting circled in the image below, in case the language of the question is not clear enough. This is for a large, multi-site environment, where ideally/hopefully all servers point at their site's Domain Controller as the primary DNS server, and a DNS server at a different site as the secondary DNS server. For simplicity's sake, we can say that the secondary server would be the Domain Controller at the home site for everyone, and there are no tertiary DNS servers (even though that's not actually the case). Try as I might, I can't seem to find a GPO setting for this (at least on FL 2003 R2, the Computer Configuration - Administrative Templates - Network - DNS Client - DNS Servers GPO is "Supported on: Windows XP Professional only"), and I find it rather hard to believe that the "best"/"standard" solution would therefore be either scripting up something to apply the DNS settings per site, or using a DHCP server to push those configurations out to the other servers via the DHCP Scope Options. So, is there a standard way of managing this configuration? (That's hopefully not "a script" or "DHCP Scope Options".)
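    If the answer really does end up being "a script", this is roughly the per-site WMI approach I have in mind (the server name and addresses are made up, and it assumes remote WMI/DCOM is reachable):
    # Sketch only - replace the server name and the per-site DNS addresses with real values.
    $dnsServers = '10.1.0.10', '10.9.0.10'   # local site's DC first, a remote DC second
    $nics = Get-WmiObject Win32_NetworkAdapterConfiguration -ComputerName 'MEMBER-SRV01' -Filter 'IPEnabled = TRUE'
    foreach ($nic in $nics) {
        [void]$nic.SetDNSServerSearchOrder($dnsServers)
    }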

    Read the article

  • How can I make grub2 boot into Windows 7?

    - by Grzenio
    I had Windows 7 installed on my system, then I installed Debian testing with grub2 as its boot manager. Initially I couldn't see a Windows entry in grub at all, so I ran:
    aptitude install os-prober kcpuload
    update-grub
    Now I can see the entry, but when I select it I get only the Win7 system restore instead of the real thing. Any ideas how to make it work?
    EDIT: I tried the suggested approach of adding a new file to /etc/grub.d, which generated an entry in grub.cfg, but it does not appear in the grub menu on boot :( I have this:
    grzes:/home/ga# cat /etc/grub.d/11_Windows
    #! /bin/sh -e
    echo Adding Windows >&2
    cat << EOF
    menuentry “Windows 7” {
    set root=(hd0,2)
    chainloader +1
    }
    And I have the following grub.cfg file:
    grzes:/home/ga# cat /boot/grub/grub.cfg
    #
    # DO NOT EDIT THIS FILE
    #
    # It is automatically generated by /usr/sbin/grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    #
    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
      load_env
    fi
    set default="0"
    if [ ${prev_saved_entry} ]; then
      set saved_entry=${prev_saved_entry}
      save_env saved_entry
      set prev_saved_entry=
      save_env prev_saved_entry
      set boot_once=true
    fi
    function savedefault {
      if [ -z ${boot_once} ]; then
        saved_entry=${chosen}
        save_env saved_entry
      fi
    }
    insmod ext2
    set root=(hd0,3)
    search --no-floppy --fs-uuid --set 6ce3ff31-0ef7-41df-a6f5-b6b886db3a94
    if loadfont /usr/share/grub/unicode.pf2 ; then
      set gfxmode=640x480
      insmod gfxterm
      insmod vbe
      if terminal_output gfxterm ; then true ; else
        # For backward compatibility with versions of terminal.mod that don't
        # understand terminal_output
        terminal gfxterm
      fi
    fi
    set locale_dir=/boot/grub/locale
    set lang=en
    insmod gettext
    set timeout=5
    ### END /etc/grub.d/00_header ###

    Read the article

  • Router that allows custom Dynamic DNS server [closed]

    - by Thuy
    I've made my own DDNS service and it works fine using an application running on clients to update the IP. But if for some reason I don't have the choice of using my software and instead I need to use a router to update the IP, it becomes troublesome. For example, I needed to setup IPsec from a customer to me and the customers router/firewall (netgear srx5308) has a dynamic IP which is given from the ISP which can't offer static IPs. So it needs to use dynamic dns for it to work. In this case there really isn't a client to run the software on since it's a router/firewall. Unfortunately it seems that most routers are rather unfriendly towards custom DDNS solutions and most offer only dyndns.com or similar templates. Which was the case with this router too. Leaving me with no way to use my own dynamic dns server IP. I have the option of switching out the customers router and I've been looking around for alternatives and other routers/solutions and I was wondering if anyone on this great site might have been in a similar situation or might just know about some router/firewall that is more friendly towards custom ddns solutions that I might be able to use. Thanks in advance for any help or guidance!

    Read the article

  • How to start MSSQL Server with corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, when trying to create a new database, MSSQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run". As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100gb of free space). We tried restarting the machine the MSSQL Server is running on. When the server came back online, we received the same error. We have tried deleting tempdb.mdf and restoring the model db from the templates folder, but neither of these solved the issue. We are unable to connect to the database, even in single user mode. Many of the online solutions have us running SQL commands against the server, but we are unable to connect (even in single user mode) to the DB to run commands against the server. Specific error messages:
    Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927)
    The SQL Server (MSSQLSERVER) service is starting. The SQL Server (MSSQLSERVER) service could not be started. A service specific error occurred: 1814.
    We need the server up and running again ASAP.
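    For what it's worth, the sequence being suggested to us most often (which we have not yet verified against this exact failure, so treat it strictly as a sketch; the backup path is a placeholder) is to start the instance with minimal configuration and trace flag 3608 so that only master is recovered, then restore model from a backup:
    # Sketch only - trace flag 3608 recovers master only; the backup path is a placeholder.
    net stop MSSQLSERVER
    net start MSSQLSERVER /f /T3608
    sqlcmd -E -S . -Q "RESTORE DATABASE model FROM DISK = N'D:\Backups\model.bak' WITH REPLACE"
    net stop MSSQLSERVER
    net start MSSQLSERVER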

    Read the article

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using powershell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/msi. Here's the situation: Because of a comedy of cumulative corporate bumpfuggery we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise on very short notice and delivery schedule. Of course, I'm not a windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then on to solaris and cisco, then linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms). So we decided to bring in a contractor to do this for us. and they met the deadline. The system is up and mostly usable, and this is good. We would not have been able to do this. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations. Here's my question. The contractor used powershell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deployment, configuration and updating microsoft stuff uses elements of GPOs and ADMX templates, along with maybe some third party stuff like PolicyPak. Are there solid reasons that I've not found yet that powershell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on this. Thoughts? or weblinks? Thanks!
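    To make the comparison concrete for myself, the kind of setting I mean is something that could either be an ADMX-backed GPO or a line of imperative PowerShell pushed by a script; a made-up example (the key and value shown are just illustrative):
    # Made-up example: a policy-style registry value applied imperatively instead of via GPO/ADMX.
    $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    Set-ItemProperty -Path $key -Name 'NoAutoUpdate' -Value 1 -Type DWord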

    Read the article

  • Create new vms with a template with a csv. Possible?

    - by EdConde
    I am new to PowerShell and PowerCLI, but I manage a few ESX environments and really would like to do as much as possible via PowerShell. On with the help I need: I used this one-liner to create VMs from templates, but the problem is there has to be some user input after each new VM is created.
    New-VM name -Template template -VMHost VMHost -Datastore Datastore
    What I would like to do is import via CSV the name of the new VM, the template to use, the host to put the new VM on and the datastore, all from a CSV. I don't know if it is as easy as below, but I kept getting errors.
    Import-Csv "C:\powershell\Data\VM2Create.csv" | ForEach-Object { New-VM $_.name -Template $_.template -VMHost $_.VMHost -Datastore $_.Datastore }
    I know there are some () or {} or possibly | that are needed... I just don't know where to put them. The CSV I think would look like this:
    name, template, vmhost, datastore
    Any help or thoughts would be much appreciated...
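    In case it matters, here is a minimal sketch of how I would expect the corrected pipeline to look, assuming the CSV headers are exactly name, template, vmhost, datastore (resolving the names to objects first, which I gather is the safer route):
    # Sketch only - assumes a CSV with headers: name,template,vmhost,datastore
    Import-Csv 'C:\powershell\Data\VM2Create.csv' | ForEach-Object {
        New-VM -Name $_.name `
               -Template (Get-Template -Name $_.template) `
               -VMHost (Get-VMHost -Name $_.vmhost) `
               -Datastore (Get-Datastore -Name $_.datastore)
    }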

    Read the article

  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish: it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call Apache directly it works fine. I'm running Varnish version 3.0.3-1 on Debian Squeeze and, for now, Apache on port 80 and Varnish on port 8080 on the same server. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as the base for my VCLs and modified the VCLs to support Concrete5. Does anyone have any clue how I should debug this?
    backend default {
      .host = "127.0.0.1";
      .port = "80";
      .connect_timeout = 1.5s;
      .first_byte_timeout = 45s;
      .between_bytes_timeout = 30s;
      .probe = {
        .url = "/";
        .timeout = 1s;
        .interval = 10s;
        .window = 10;
        .threshold = 8;
      }
    }
    LOG
    0 CLI - Rd ping
    0 CLI - Wr 200 19 PONG 1353791312 1.0
    0 CLI - Rd ping
    0 CLI - Wr 200 19 PONG 1353791315 1.0
    0 Backend_health - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently
    (the 301 is because I check for www.)

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment. I have a node.js server running which is serving either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node which is serving all static content (css, js, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that varnish can cache static content quite well and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use redis at the moment for holding session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:
    - Throw varnish in front of nginx and let varnish cache static pages, no app code changes. Redis would cache dynamic db calls, which would require modifying my app code.
    - Ignore using varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).
    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).

    Read the article

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated: the /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP, for example file_get_contents(https://..., to fail. It also broke redmine. After a chmod 644 it works fine, but that doesn't stay upon reboot. So my question: why is this? I see no security risk because... I mean... wanna steal some random data? How can I "fix" it? The servers are isolated and used by only one application, that's why I use openvz. I think about something like a runlevel script or so... but how do I do it efficiently? Maybe with dpkg or apt? The same goes for /dev/shm. In this case I totally understand why it's not accessible, but I assume I can "fix" it the same way I fix /dev/urandom.

    Read the article

  • VMware ESX Linux Guest Customization

    - by andyh_ky
    Hello, I am interested in deploying several RHEL 4 Update 8 virtual machines for creation of a test environment. Here are the steps I am taking: In off hours, P2V/V2V the production machines and convert them to templates Deploy the virtual machines with a customization specification that changes hostname, IP address I am interested in how these processes are done and if there are any options for further customization. Are the machines brought on the network when they are powered on, before they are reconfigured? Is there a potential IP address conflict? Is there an option to run additional scripts which reside on the guest as a part of the reconfiguration? For example, restoring an Oracle Database. This is an option with Windows guests and sysprep, but I have been unable to locate anything showing a RHEL equivalent. I am dealing with a multi tier application. The main issue I am attempting to mitigate is that the application servers reference database servers by hostname and in tnsnames files. I am interested in scripting the reconfiguration of the application in the deployment so that the app/db servers are pointing to the test environment. I am OK with placing the 'cleanup' script on the source and executing it after the machine has been brought up. I am interested in the automation of the script's execution post clone/boot, as well as if there could be an IP address conflict. (cross posted to VMTN's ESX 4 community)
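    For reference, this is the rough PowerCLI shape of the deploy step I have in mind (all names are placeholders; whether the Linux customization brings the guest onto the network before or after reconfiguration is exactly the part I am unsure about):
    # Sketch only - template, spec, host and datastore names are placeholders.
    $spec = Get-OSCustomizationSpec -Name 'rhel4-test-spec'
    New-VM -Name 'appsrv-test01' `
           -Template (Get-Template -Name 'rhel4u8-app') `
           -VMHost (Get-VMHost -Name 'esx01.lab.local') `
           -Datastore (Get-Datastore -Name 'test_lun01') `
           -OSCustomizationSpec $spec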

    Read the article

  • How to change Fluent NHibernate reference column name on a HasMany relationship using IHasManyConven

    - by snicker
    Currently I'm using Fluent NHibernate to generate my database schema, but I want the entities in a HasMany relationship to point to a different column for the reference. IE, this is what NHibernate will generate in the creation DDL:
    alter table `Pony` add index (Stable_ID), add constraint Ponies_Stable foreign key (Stable_Id) references `Stable` (Id);
    This is what I want to have:
    alter table `Pony` add index (Stable_ID), add constraint Ponies_Stable foreign key (Stable_Id) references `Stable` (EntityId);
    Where Stable.ID would be the primary key and Stable.EntityId is just another column that I set. I have a class already that looks like this:
    public class ForeignKeyReferenceConvention : IHasManyConvention
    {
        public void Apply(IOneToManyCollectionInstance instance)
        {
            instance.Cascade.All();
            // What goes here so that I can change the reference column?
        }
    }
    What do I have to do to get the reference column to change?

    Read the article

  • Drag and Drop in MVVM with ScatterView

    - by Rich McGuire
    I'm trying to implement drag and drop functionality in a Surface Application that is built using the MVVM pattern. I'm struggling to come up with a means to implement this while adhering to the MVVM pattern. Though I'm trying to do this within a Surface Application I think the solution is general enough to apply to WPF as well. I'm trying to produce the following functionality: User contacts a FrameworkElement within a ScatterViewItem to begin a drag operation (a specific part of the ScatterViewItem initiates the drag/drop functionality) When the drag operation begins a copy of that ScatterViewItem is created and imposed upon the original ScatterViewItem, the copy is what the user will drag and ultimately drop The user can drop the item onto another ScatterViewItem (placed in a separate ScatterView) The overall interaction is quite similar to the ShoppingCart application provided in the Surface SDK, except that the source objects are contained within a ScatterView rather than a ListBox. I'm unsure how to proceeded in order to enable the proper communication between my ViewModels in order to provide this functionality. The main issue I've encountered is replicating the ScatterViewItem when the user contacts the FrameworkElement.

    Read the article

  • htmltext of TextArea in Flex 3 disappears when embedding fonts!

    - by Ali Syed
    Hello, I have a TextArea which gets its text through user input at runtime. User input comes through a RichTextEditor, so it is HTML. I save the HTML text from the RichTextEditor to the TextArea's htmlText property. Everything seems to be fine until I try to embed fonts! (I need to embed fonts because I apply a fade effect to the TextArea.) With embedded fonts the text simply disappears! Could you help me out here please! I am really desperate! Ali M Syed
    @font-face {
        src: local("Arial");
        fontFamily: ArialEmbedded;
    }
    . . .
    body.setStyle("fontFamily", "ArialEmbedded");
    body is TextArea

    Read the article

  • XmlSerializer - There was an error reflecting type

    - by oo
    Using C# .NET 2.0, I have a composite data class that does have the [Serializable] attribute on it. I am creating an XMLSerializer class and passing that into the constructor: XmlSerializer serializer = new XmlSerializer(typeof(DataClass)); I am getting an exception saying: There was an error reflecting type. Inside the data class there is another composite object. Does this also need to have the [Serializable] attribute or by having it on the top object does it recursively apply it to all objects inside? Any thoughts?

    Read the article

  • Placemark name from KML to OpenLayers Label.

    - by Ozaki
    TLDR: I have a KML layer in OpenLayers. It pulls out the geometries, images, placemarks, etc., but will not apply a label to any of my placemarks. I am stumped by this one. My StyleMap is as follows:
    var styleMap = new OpenLayers.StyleMap({
        fillOpacity: 1,
        pointRadius: 10,
        label: "${name}", // I believe this should pull the name attr from the KML
        fontColor: "#7E3C1C",
        fontSize: "13px",
        fontFamily: "Courier New, monospace",
        fontWeight: "strong",
        labelXOffset: "0",
        labelYOffset: "-15"
    });
    and the layer:
    var mykmllayer = new OpenLayers.Layer.Vector("KML Layer", {
        styleMap: styleMap,
        projection: new OpenLayers.Projection("EPSG:4326"),
        strategies: [new OpenLayers.Strategy.Fixed()],
        protocol: new OpenLayers.Protocol.HTTP({
            url: URLTOKML,
            format: new OpenLayers.Format.KML({
                styleMap: myStyles,
                extractStyles: true,
                extractAttributes: true
            })
        })
    });
    Does anyone know what I might be missing, or a reason why this wouldn't work?

    Read the article

  • soap vs REST vs JSON in SOA

    - by Muhammad Adnan
    I'm writing here to clear up my, and maybe many other people's, misconceptions about these...
    First question: SOAP is an XML-based protocol, REST is a web-based architectural style for web services, and JSON is a standard that is not XML-based. How can we compare them, when the three are different kinds of things?
    Second question: is a REST response XML-based only, or JSON-based also? If it is also XML-based, then how can we consider it different from SOAP, and even faster?
    Third question: how can we apply an authentication header on REST and JSON-based web services (any reference with description)?
    Fourth question: what is SOA, and if some application contains some web services, can we consider it SOA-based? I mean, what are the SOA specs?
    Your response would be appreciated :)

    Read the article

  • Consuming WebSphere from WCF client: Unable to create AxisService from ServiceEndpointAddress

    - by JohnIdol
    I am consuming (or trying to consume) a WebSphere service from a WCF client (service reference + bindings generated through svcutil). The connection seems to be established successfully, but I am getting the following error:
    CWWSS7200E: Unable to create AxisService from ServiceEndpointAddress [address]
    Does that ring any bells? I am guessing the request format is somehow being rejected by the service; I am sniffing it with Fiddler and it looks fine overall (I can post it if people think it could help). Found this article, but it doesn't seem to apply to my case. Any help appreciated!

    Read the article

  • Linq where clause with multiple conditions and null check

    - by SocialAddict
    I'm trying to check if a date is not null in LINQ, and if it isn't, check whether it's a past date.
    QuestionnaireRepo.FindAll(q => !q.ExpiredDate.HasValue || q.ExpiredDate > DateTime.Now).OrderByDescending(order => order.CreatedDate);
    I need the second check to only apply if the first is true. I am using a single repository pattern and FindAll accepts a where clause. Any ideas? There are lots of similar questions on here, but none that give the answer. I'm very new to LINQ, as you may have guessed :)
    Edit: I get the results I require now, but it will be checking the conditional on null values in some cases. Is this not a bad thing?

    Read the article

  • Hierarchical templating multiple object types in silverlight

    - by Dan Wray
    Is it possible and if so what is the best way to implement the following hierarchical structure in a silverlight (4) TreeView control? (where Item and Group are classes which can exist in the tree).
    Group
    |
    |-Item
    |
    |-Group
    | |
    | |-Item
    | |
    | |-Item
    |
    |-Item
    The structure could of course be arbitrarily more complex than this, if needed. HierarchicalDataTemplates appear to be the way to approach this, but I'm specifically having trouble understanding how I might apply the template to interpret the different classes correctly. A similar question was asked for WPF, the answer for which made use of the TargetType property on the HierarchicalDataTemplate, but I am uncertain whether that property is available in the silverlight version since I don't appear to have access to it in my environment.

    Read the article

  • LINQ To SQL exception: Local sequence cannot be used in LINQ to SQL implementation of query operator

    - by pcampbell
    Consider this LINQ To SQL query. It's intention is to take a string[] of search terms and apply the terms to a bunch of different fields on the SQL table: string[] searchTerms = new string[] {"hello","world","foo"}; List<Cust> = db.Custs.Where(c => searchTerms.Any(st => st.Equals(c.Email)) || searchTerms.Any(st => st.Equals(c.FirstName)) || searchTerms.Any(st => st.Equals(c.LastName)) || searchTerms.Any(st => st.Equals(c.City)) || searchTerms.Any(st => st.Equals(c.Postal)) || searchTerms.Any(st => st.Equals(c.Phone)) || searchTerms.Any(st => c.AddressLine1.Contains(st)) ) .ToList(); An exception is raised: Local sequence cannot be used in LINQ to SQL implementation of query operators except the Contains() operator Question: Why is this exception raised, and how can the query be rewritten to avoid this exception?

    Read the article

  • SUBSONIC 3.0.0.3 Subsonic.Query.SqlQuery

    - by dancingn27
    New to Subsonic and having issues figuring it out. I am simply trying to do a distinct search, and any documentation I find is telling me to use the class/method SubSonic.SqlQuery. Though I am finding that since I am using the newest version, a lot of the documentation does not apply. For example, I am getting this query working beautifully using Subsonic.Query.SqlQuery, though there is NO Distinct method hanging off of it as suggested by what I have seen. Please advise!
    SubSonic.Query.SqlQuery query = brickDB.SelectColumns(new string[] { "DomainName" })
        .From<Web.Data.DB.WebLog>()
        .Where(Web.Data.DB.WebLogTable.DomainNameColumn).IsNotNull();
    -> No Distinct hanging off of From<>()...

    Read the article
