Search Results

Search found 19625 results on 785 pages for 'local groups'.

  • Debugging UI Problems in IE8 (Was IE8 on Windows 7 Authentication Mess)

    - by alharaka
    UPDATE: I think the real question I need to ask here is: how does a technician debug UI problems with Internet Explorer, as opposed to HTML rendering issues, which have pretty good tools? I am aware of the SysInternals tools and others mentioned below, but maybe I am not harnessing their power properly. Someone else in the TechNet forum I mentioned had a similar issue. Again, I have lots of data; I am just not sure how to interpret it properly.

    ORIGINAL POST: So I tried the venerable TechNet forums to solve this issue. In short, the Windows Security dialog has no place to put credentials, rendering it pretty much useless. This happens on a whole bunch of our intranet websites, and only a select number of users with a few laptops have this problem. It ends up looking like this. Things I have tried so far:

    - Disabling local Group Policy (not domain connected)
    - Disabling local Security Policy
    - Resetting IE settings
    - A few system restores
    - Re-registering a bunch of IE DLLs and all the other steps here
    - Reinstalling IE8 (dism /online /disable-feature /featurename:"internet-explorer-optional-x86", reboot, then dism /online /enable-feature /featurename:"internet-explorer-optional-x86", and reboot again)
    - An SFC scan, which found nothing

    Still, nothing. Not only am I fed up, but I have begun to really work with APIExplorer and Procmon as mentioned in the TechNet original, because I want to know WHAT is happening, not just fix it. Any thoughts?

  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, such that the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to be able to authoritatively identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports /home and the local client has:

        # mount
        fileserver:/home/l/lars on /home/lars type nfs (rw...)
        fileserver:/home/b/bob on /home/bob type nfs (rw...)

    I am looking for a way to identify that both /home/lars and /home/bob are in fact the same filesystem. In theory this is what the f_fsid member of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this member is 0. Is this some sort of client-side issue? Or do most modern NFS servers simply decline to provide a useful fsid?

    The end goal of all of this is to robustly interpret the output of the quota command for NFS filesystems. For example, given the setup above, running quota as myself may return something like:

        Disk quotas for user lars (uid 6580):
             Filesystem   blocks     quota     limit  grace   files       quota       limit  grace
        otherserver:/vol/home0/a/alice
                              12  52428800  52428800              4  4294967295  4294967295
        fileserver:/home/l/lars
                         9353032   9728000  10240000         124018           0           0

    The problem here is that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check whether it matched the fsid associated with my home directory. It looks like this won't work, so... any suggestions?
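
    In case it helps to see what I'm after, this is the check I was hoping to script (a sketch using GNU coreutils stat; the two paths are from the example above - stat -f %i prints the filesystem ID, and plain stat %d prints the st_dev device number as a cruder fallback):

        # print filesystem identity for each mountpoint
        for m in /home/lars /home/bob; do
            printf '%s fsid=%s dev=%s\n' "$m" \
                "$(stat -f -c %i "$m")" "$(stat -c %d "$m")"
        done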

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up.

    This problem actually occurred on Virtual Server too, after a migration from one server to another, but I managed to fix it then with:

        oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

        Thu Jun 10 14:14:48 2010
        C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
        Thu Jun 10 14:14:48 2010
        ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

        Fatal NI connect error 12560, connecting to:
        (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

        VERSION INFORMATION:
        TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
        Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
        Time: 10-JUN-2010 14:14:48
        Tracing not turned on.
        Tns error struct:
        ns main err code: 12560
        TNS-12560: TNS:protocol adapter error
        ns secondary err code: 0
        nt main err code: 530
        TNS-00530: Protocol adapter error
        nt secondary err code: 2
        nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing, which shows:

        [10-JUN-2010 15:09:33.919] snlpcss: entry
        [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
        [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found), I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack in place: a Scheduled Task runs oradim -startup -sid ORCL when the Administrator account logs on, and the VM is set to auto-logon. I'd still like to work out why it's not working.
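
    If nothing better turns up, I'll at least try replacing the auto-logon hack with a boot-time task run as SYSTEM - a sketch only, since I haven't verified how schtasks behaves on this particular box:

        rem run the instance startup at boot instead of at logon
        schtasks /create /tn "Oracle ORCL startup" /sc onstart /ru SYSTEM ^
            /tr "C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid ORCL"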

  • Migrating from "partial" Exchange 2003 to full Exchange 2003 usability

    - by TheCleaner
    I have a client that is using Exchange 2003 on SBS 2003 R2, but only for calendar and contact sharing. Their email still comes to their clients via a POP3 account in each client's Outlook. I'd like to move them over to using Exchange for email as well as the things they are already utilizing it for. Can you folks guide me in the right direction?

    The setup:

    - the external domain is akin to domain.com (and is where they get their POP3 email from now)
    - the internal domain is akin to domain.local
    - only a simple hardware firewall (no ISA)
    - a static external IP is available to use

    My "assumptions":

    - Set up an SMTP default connector in Exchange for their existing external domain
    - Have their existing email backed up to PST files (just in case)
    - Set up new MX records to point domain.com to the static external IP

    I'm a little confused about how I'm going to set up their existing Exchange accounts with the proper SMTP address, though. Right now it is just user@domain.local. Do I just need to modify or create a new recipient policy? Are there other steps involved that I'm missing? Anyone with a walkthrough or even basic "steps" is fine. I'm fairly used to Exchange 2003, but I've been on Exchange 2007 for a while now, so going back is the weird part... plus I don't know what issues Exchange 2003 on SBS has versus the normal version. Thanks for all the help!

  • How can I map a Windows group login to the dbo schema in a database?

    - by Christian Hayter
    I have a database for which I want to restrict access to three named individuals. I thought I could do the following:

    - Create a local Windows group on the database server and add the named individuals to it.
    - Create a Windows login in SQL Server mapped to the local Windows group.
    - Map the login to the "dbo" schema in the database, so that the users can access all objects without having to qualify them with the schema name.

    When I try to do step 3, I get the following error:

        Msg 15353, Level 16, State 1, Line 1
        An entity of type database cannot be owned by a role, a group, an approle, or by principals mapped to certificates or asymmetric keys.

    I have tried to do this via the IDE, the sp_changedbowner sproc, and the ALTER AUTHORIZATION command, and I get the same error each time. After searching MSDN and Google, I find that this restriction is by design. Great, that's useful. Can anyone tell me:

    - Why does this restriction exist? It seems very arbitrary.
    - More importantly, can I accomplish my requirement some other way?

    Other info that might be pertinent:

    - The server is fully up to date with service packs and hotfixes.
    - All objects in the database are owned by the "dbo" schema, and it's not feasible to change that.
    - The database is running in compatibility level 80, and it's not feasible to change that to 90 yet.
    - I am free to make any other changes (within reason, depending on what they are).
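
    For what it's worth, the fallback I'm considering is to skip dbo ownership entirely: since a user with no explicit default schema falls back to dbo when resolving unqualified names, db_owner membership for the group might get me the same effect. A sketch, with SERVERNAME\DbUsers and MyDatabase as placeholder names:

        -- map the local Windows group in and give it db_owner
        CREATE LOGIN [SERVERNAME\DbUsers] FROM WINDOWS;
        USE MyDatabase;
        CREATE USER [SERVERNAME\DbUsers] FOR LOGIN [SERVERNAME\DbUsers];
        EXEC sp_addrolemember 'db_owner', 'SERVERNAME\DbUsers';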

  • How to grant folder permissions for WMSvc in IIS

    - by LillyPop
    I'm using Web Deploy with IIS 7, and I want to grant permissions on a web site's physical folder. I've done this before (I have another physical folder with read/write access for WMSvc), but I have forgotten how I did it!

    When I go to the physical folder > Security tab > Edit > Add, enter the object name WMSvc, and click Check Names, I get "An object named WMSvc cannot be found". Yet I have the WMSvc object listed fine under "Groups or user names" on the other folder I mentioned above. I feel a bit daft - what am I doing wrong, and how can I give folder permissions to the WMSvc object on a physical folder?
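
    For the record, the command-line route I'd fall back on, assuming WMSvc is still running as the default Local Service account (the folder path here is just an example):

        rem grant modify rights, inherited by subfolders and files
        icacls "C:\inetpub\wwwroot\mysite" /grant "LOCAL SERVICE":(OI)(CI)M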

  • RDS, RDWeb, and RemoteApp: How to use public certificate for launching apps on session host?

    - by Bret Fisher
    Question: How do I tell RD Web Access to launch apps from remote.domain.com rather than host.internaldomain.local?

    Environment: an existing org with an AD forest, and a new single Server 2012 machine running all Remote Desktop Services roles for the session host. I used the new 2012 wizard to set up a "QuickSessionCollection" with the roles:

    - RD Session Host
    - RD Connection Broker
    - RD Gateway
    - RD Web Access
    - RD Licensing

    Everything works with a self-signed cert, but we want to prevent those warnings. The users are potentially on non-domain machines, so sticking a private root cert on their machines isn't an option: every part of the solution needs to use a public cert. I added the public remote.domain.com cert to all roles using the Server Manager GUI:

    - RD Connection Broker - Enable Single Sign On
    - RD Connection Broker - Publishing
    - RD Web Access
    - RD Gateway

    So now everything works beautifully except the last step:

    - The user logs into https://remote.domain.com.
    - The user clicks an app icon, which in the background downloads a .rdp file signed by remote.domain.com.
    - The .rdp file is set to use the RD Gateway, which is remote.domain.com.
    - The .rdp file says the app is hosted on the internal host.internaldomain.local, which doesn't match the RDP-Tcp TLS cert of remote.domain.com, and pops a warning.

    It's this last step that I'd like to fix. Is there a config option in PowerShell, WMI, or a .config file to tell RD Web Access/RemoteApp to use remote.domain.com for all published apps, so that the TLS cert for RDP matches what the session host is using?

    NOTE: This question talks about this issue, and this answer mentions how you might fix it in 2008, but that GUI doesn't exist in 2012 for RemoteApp, and I can't find a PowerShell setting for it.

    NOTE: Here's a screenshot of the setting in 2008 R2 that I need to change. It tells RemoteApp what to use for the session host server name. How can I set that in 2012?
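
    One avenue I'm experimenting with is pushing custom .rdp properties into everything the collection publishes - strictly a sketch, and I don't know yet whether it actually clears the certificate warning ("use redirection server name" and "alternate full address" are standard .rdp file settings; the collection name is from the wizard above):

        Set-RDSessionCollectionConfiguration -CollectionName "QuickSessionCollection" `
            -CustomRdpProperty "use redirection server name:i:1`nalternate full address:s:remote.domain.com"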

  • Getting Perl DBD::mysql working on OS X 10.7?

    - by Bart B
    I can't seem to get Perl and MySQL to talk to each other on OS X 10.7 Lion. I did all the installs by the book: I used Oracle's PKG installer for the latest MySQL Community Server, and I installed DBI and DBD::mysql via CPAN. There were no problems at all during the install, but when I try to use DBD::mysql to connect to my local DB server, I get the following error:

        install_driver(mysql) failed: Can't load '/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle' for module DBD::mysql: dlopen(/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle, 1): Library not loaded: /usr/local/mysql/lib/libmysqlclient.16.dylib
        Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
        Reason: image not found at /System/Library/Perl/5.12/darwin-thread-multi-2level/DynaLoader.pm line 204.
        at (eval 3) line 3
        Compilation failed in require at (eval 3) line 3.
        Perhaps a required shared library or dll isn't installed where expected

    After a lot of googling, all I could find were suggested hacks, so I gave this one a go: http://arkoftech.wordpress.com/2011/02/10/fixing-dbdmysql-for-mysql-5-5-89-under-macos-10-6-x/ (I had to update some of the paths in the instructions, since on Lion it's Perl 5.12, not 5.10). After doing that I got a new error:

        dyld: lazy symbol binding failed: Symbol not found: _mysql_init
        Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
        Expected in: flat namespace
        dyld: Symbol not found: _mysql_init
        Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
        Expected in: flat namespace
        Trace/BPT trap: 5

    There must be a simple way to get MySQL and Perl working on OS X - HELP!
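
    In case someone can spot the problem from it, this is the sanity check I keep running - a sketch, where the connect string just points at a throwaway test database:

        # does the client library exist where the bundle expects it?
        ls -l /usr/local/mysql/lib/libmysqlclient*.dylib
        # if it lives elsewhere, point the loader at it for one test run
        export DYLD_LIBRARY_PATH=/usr/local/mysql/lib:$DYLD_LIBRARY_PATH
        perl -MDBI -e 'DBI->connect("dbi:mysql:database=test", "root", "") or die $DBI::errstr;'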

  • PPTP Client setup, Fedora 17

    - by Suarez Romina
    I am trying to connect to hidemyass.com's VPN service via PPTP, but I am having trouble understanding why it isn't working: I get no warning or fatal error, yet my IP remains the same. This is how I create the connection:

        [root@lasvegas-nv-datacenter ~]# pptpsetup --create TUNNELNAME --server 199.58.165.20 --username MYUSERNAME --password MYPASSWORD --encrypt --start

    And this is the output:

        Using interface ppp0
        Connect: ppp0 <--> /dev/pts/1
        CHAP authentication succeeded
        MPPE 128-bit stateless compression enabled
        local IP address 10.200.21.14
        remote IP address 10.200.20.1

    After that, I check the log, and this is what I get:

        [root@lasvegas-nv-datacenter ~]# tail -f /var/log/messages
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 1 'Start-Control-Connection-Request'
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:754]: Received Start Control Connection Reply
        Aug 24 11:25:33 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:788]: Client connection established.
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_rep:pptp_ctrl.c:254]: Sent control packet type is 7 'Outgoing-Call-Request'
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:873]: Received Outgoing Call Reply.
        Aug 24 11:25:34 lasvegas-nv-datacenter pptp[3892]: anon log[ctrlp_disp:pptp_ctrl.c:912]: Outgoing call established (call ID 0, peer's call ID 20096).
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: CHAP authentication succeeded
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: MPPE 128-bit stateless compression enabled
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: local IP address 10.200.21.14
        Aug 24 11:25:38 lasvegas-nv-datacenter pppd[3884]: remote IP address 10.200.20.1

    Can someone help me? Basically, I need to connect to the VPN and have my IP changed after the connection. I read a lot of guides but still cannot understand why I don't get a connection.
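
    A sketch of what I suspect is missing: pptpsetup brings the tunnel up, but nothing tells the kernel to actually route traffic through it. The interface name comes from the output above, and the lookup URL is just one example of an external IP check:

        # send everything through the tunnel, then verify the public IP
        ip route replace default dev ppp0
        curl http://ifconfig.me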

  • How to perform SCP as a sudo user

    - by Ramesh.T
    What is the best way of doing SCP from one box to the other as a sudo user? There are two servers:

    - Server A: 10.152.2.10, /home/oracle/export/files.txt, user: deploy
    - Server B: 10.152.2.11, /home/oracle/import/, user: deploy, sudo user: /usr/local/bin/tester

    All I want is to copy files from Server A to Server B as the sudo user. To do this, I normally log in as the deploy user on the target server, then switch to the sudo user without a password, and after that use SCP to copy the file; this is how I normally perform this activity. To automate it, I have written this script:

        #!/bin/sh
        ssh deploy@lnx120 sudo /usr/local/bin/tester "./tester/deploy.sh"

    I have generated the private key for the deploy user, so it lets me log in as deploy without a password. After that, the sudo command is executed and switches the user to tester... and then nothing happens - I mean, the script is not getting executed. Is there any way to accomplish this in a different way?
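
    A sketch of the pattern I'm considering instead: stage the file as the login user, then let sudo move it into place. This assumes the sudoers entry allows deploy to run commands as tester, and uses -t because sudo sometimes insists on a tty:

        # copy as deploy, then move into place as the sudo user
        scp /home/oracle/export/files.txt deploy@10.152.2.11:/tmp/files.txt
        ssh -t deploy@10.152.2.11 'sudo -u tester cp /tmp/files.txt /home/oracle/import/'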

  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site) and nameservers (10.100.1.1, 10.100.1.2) through DHCP; these get written into /etc/resolv.conf. I access the internet using the work network, and these two nameservers end up answering any public domain name lookups.

    I also have a private VPN that I connect to. The nameserver (10.111.1.1) and domain (private.site) for this network rarely change, but currently they're pushed by the openVPN client into NetworkManager, and they also get merged into the existing /etc/resolv.conf. My resolv.conf ultimately ends up looking like this:

        search private.site work.site
        nameserver 127.0.0.1
        nameserver 10.111.1.1
        nameserver 10.100.1.1

    As you can see, the second nameserver from my work network was pushed out because of the three-entry limit. It is fine for now, but it would be a problem if the remaining nameserver went down for maintenance or something.

    So I found out that dnsmasq could help me here, and I set up dnsmasq as a local DNS resolver without any DHCP support. This is my /etc/dnsmasq.conf right now:

        resolv-file=/etc/resolv.conf
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries

    I've made dnsmasq get the list of nameservers from /etc/resolv.conf, since NetworkManager seems to be updating this list correctly (for a maximum of three nameservers). I'm able to resolve host names in both networks correctly. So these are the questions I have:

    - Is there a way I can make either NetworkManager or dhclient write the list of nameservers somewhere else, which I can then point dnsmasq at via resolv-file?
    - How do I make dnsmasq use certain nameservers as the default for all queries? Right now I notice that lookups for public domains on the internet are usually sent to both nameservers - the one on work.site as well as private.site. It would be good if I could limit this to work.site only.
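
    For the second question, this is the configuration sketch I have in mind: name the work.site servers explicitly as the defaults and stop reading resolv.conf at all, while private.site stays pinned to the VPN server:

        # /etc/dnsmasq.conf (sketch)
        no-resolv                                  # ignore /etc/resolv.conf entirely
        server=/private.site/10.111.1.1            # VPN domain -> VPN nameserver
        server=/1.111.10.in-addr.arpa/10.111.1.1   # ...including reverse lookups
        server=10.100.1.1                          # everything else -> work.site
        server=10.100.1.2
        listen-address=127.0.0.1
        bind-interfaces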

  • Help diagnosing Likewise Open Active Directory authentication problem

    - by purpletonic
    I have two servers which were, until recently, authenticating against the company's Active Directory domain controller. I believe a recent change to the Active Directory administrator password caused the servers to stop authenticating against AD. I tried to add the servers back to the domain using the command:

        domainjoin-cli join example.com adusername

    This seemed to work without complaint, but when I try to log in via ssh with my domain account, I get an invalid password error. When I run the command lw-enum-users, it prints all of the domain users; looking up my own account, I see that it is valid and my password hasn't expired. I also ran lw-get-status and received the following:

        LSA Server Status:
        Agent version: 5.0.0
        Uptime: 0 days 3 hours 35 minutes 46 seconds
        [Authentication provider: lsa-activedirectory-provider]
            Status: Online
            Mode: Un-provisioned
            Domain: example.com
            Forest: example.com
            Site: Default-First-Site-Name
            Online check interval: 300 seconds
            [Trusted Domains: 1]
            [Domain: EXAMPLE]
                DNS Domain: example.com
                Netbios name: EXAMPLE
                Forest name: example.com
                Trustee DNS name:
                Client site name: Default-First-Site-Name
                Domain SID: S-1-5-24-1081533780-4562211299-822531512
                Domain GUID: 057f0239-7715-4711-e64b-eb5eeed20e65
                Trust Flags: [0x001d]
                    [0x0001 - In forest]
                    [0x0004 - Tree root]
                    [0x0008 - Primary]
                    [0x0010 - Native]
                Trust type: Up Level
                Trust Attributes: [0x0000]
                Trust Direction: Primary Domain
                Trust Mode: In my forest Trust (MFT)
                Domain flags: [0x0001]
                    [0x0001 - Primary]
                [Domain Controller (DC) Information]
                    DC Name: dc1.example.com
                    DC Address: 10.11.0.103
                    DC Site: Default-First-Site-Name
                    DC Flags: [0x000003fd]
                    DC Is PDC: yes
                    DC is time server: yes
                    DC has writeable DS: yes
                    DC is Global Catalog: yes
                    DC is running KDC: yes
        [Authentication provider: lsa-local-provider]
            Status: Online
            Mode: Local system

    Anyone got any ideas what might be occurring? Thanks in advance!
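
    For completeness, the next thing I plan to try is a full leave and rejoin rather than joining over the top, then restarting the Likewise auth daemon - a sketch, and the init script name is a guess on my part:

        domainjoin-cli leave
        domainjoin-cli join example.com adusername
        /etc/init.d/lsassd restart   # restart the Likewise authentication daemon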

  • Ubuntu 10.04: Unable to Start RabbitMQ Server Post-Installation

    - by Garland W. Binns
    After installing RabbitMQ on Ubuntu 10.04, I receive a failure message saying the service was unable to start. Any insight into the issue would be greatly appreciated! Below are the contents of startup_log and startup_err.

    startup_log:

        {error_logger,{{2012,7,7},{15,50,31}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,etimedout}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.100>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,512}],[]]}
        {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[[rabbitmqprelaunch877,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}

    startup_err:

        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
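
    A diagnostic sketch based on reading the first record: the badmatch on {error,etimedout} comes out of inet_tcp_dist,listen, which makes me suspect the Erlang node can't resolve or reach its own hostname, rather than anything RabbitMQ-specific:

        # does the short hostname resolve to a local address?
        hostname -s
        getent hosts "$(hostname -s)"
        # a conventional fix, if it's missing, is a loopback alias in /etc/hosts:
        #   127.0.1.1   myhostname
        sudo service rabbitmq-server start   # then retry the service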

  • Can't remotely connect through SQL Server Management Studio

    - by FAtBalloon
    I have set up a SQL Server 2008 Express instance on a dedicated Windows 2008 server hosted by 1and1.com. I cannot connect remotely to the server through Management Studio. I have taken the steps below and am beyond any further ideas. I have researched the site and cannot figure anything else out, so please forgive me if I missed something obvious, but I'm going crazy. Here's the lowdown:

    - The SQL Server instance is running and works perfectly when working locally.
    - In SQL Server Management Studio, I have checked the box "Allow remote connections to this server".
    - I have removed any external hardware firewall settings from the 1and1 admin panel.
    - Windows Firewall on the server has been disabled, but just for kicks I added an inbound rule that allows all connections on port 1433.
    - In the SQL Native Client configuration, TCP/IP is enabled. I made sure the "IP1" entry with the server's IP address had a 0 for the dynamic port, but then I deleted that and entered 1433 in the regular TCP Port field. I also set the "IPALL" TCP Port to 1433.
    - In the SQL Native Client configuration, SQL Server Browser is also running, and I also tried adding an ALIAS in the ... I restarted SQL Server after I set this value.
    - Doing a "netstat -ano" on the server machine returns:

        TCP 0.0.0.0:1433 LISTENING
        UDP 0.0.0.0:1434 LISTENING

    - A port scan from my local computer says the port is FILTERED instead of LISTENING.
    - I also tried to connect from Management Studio on my local machine, and it throws a connection error. I tried the following server names, with both SQL Server and Windows authentication enabled in the database security:

        ipaddress\SQLEXPRESS,1433
        ipaddress\SQLEXPRESS
        ipaddress
        ipaddress,1433
        tcp:ipaddress\SQLEXPRESS
        tcp:ipaddress\SQLEXPRESS,1433
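
    Two checks I can still think of running, sketched with placeholders for the IP and credentials - if the loopback test succeeds while the remote telnet times out, that would pin the FILTERED result on something upstream of Windows:

        rem on the server itself; should connect if SQL is really listening on 1433
        sqlcmd -S tcp:127.0.0.1,1433 -U sa -P <password>
        rem from my local machine; hangs or times out if a firewall is filtering
        telnet <server-ip> 1433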

  • Windows Server 2008 Create Symbolic Link, updated Security Policy still gives privilege error

    - by Matt
    Windows Server 2008 R2. I am trying to create a symbolic/soft link using the mklink command:

        mklink /D LinkName TargetDir
        e.g. C:\temp>mklink /D foo bar

    This works fine if I run the command line as Administrator. However, I need it to work for regular users as well, because ultimately I need another program (executing as a user) to be able to do this.

    So I updated the local security policy via secpol.msc. Under Local Policies > User Rights Assignment > "Create symbolic links", I added "Users" to the security setting. I rebooted the machine. It still didn't work. So I added "Everyone" to the policy. Rebooted. And STILL it didn't work.

    What on earth am I doing wrong here? I think my user is even an Administrator on this box, and running a plain command line even with this updated policy in place still gives me:

        You do not have sufficient privilege to perform this operation.
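
    One check worth running in both an elevated and a non-elevated prompt (a sketch): if the privilege shows up only in the elevated one, UAC token filtering is the likely culprit - for members of Administrators, SeCreateSymbolicLinkPrivilege is stripped from the filtered token, so the policy grant never shows up in a normal shell:

        rem is the symlink privilege present in the current token?
        whoami /priv | findstr /i SeCreateSymbolicLinkPrivilege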

  • Our application responds differently on the client's server

    - by WtFudgE
    I have an application running on our local server, and it works perfectly. However, when I deploy the application to our client's server, there are problems. If I copy all the sources from their server to another local machine, it works fine again... I am talking about an ASP.NET application running on Windows Server 2008. The client's server specs are:

    - Intel Xeon, 2.13 GHz
    - 6 GB RAM
    - 300 GB HDD
    - Windows Server 2008 R2

    Although I think it's not that important, I will describe the problems to give you a better idea: they are related to updating and deleting fields in SQL Server. Whenever more than one user is running the application and updates or deletes (not even the same record!), fields seem to be updated or deleted wrongly. But as I said before, if we copy the source to another server this is not the case, so I assume it is configuration-related. Do you guys have any idea what the issue could be? Thanks a lot.

    EDIT: My client told me he disabled the SQL-related accounts. Could this be related?

  • ASA DHCP Relay configuration..

    - by Jeff
    I have locations in different cities, connected using two Cisco ASA devices. My main location (corporate) uses the 192.168.1.x network; the second location (a remote store) uses 192.168.3.x. I have a DHCP server (192.168.1.254) at the corporate location, with a scope for 192.168.1.x that works fine there.

    I created a scope for the remote location (192.168.3.x) on my DHCP server and tried to configure DHCP relay on the remote ASA:

    - I disabled the DHCP server on the inside interface.
    - I enabled DHCP relay on the inside interface, with "set route" set to yes.
    - Under Global DHCP Relay Servers (up to four servers to which DHCP requests will be relayed), I added my DHCP server, 192.168.1.254.

    I flashed these settings to the ASA and gave it a try; it didn't do anything. Am I missing something - forgetting something? I'm not really sure what I'm doing wrong. The DHCP settings on the remote ASA:

        dhcp-client update dns server both
        dhcpd dns 192.168.1.254
        dhcpd ping_timeout 750
        dhcpd domain JEWELS.LOCAL
        dhcpd auto_config outside
        dhcpd update dns both
        !
        dhcpd address 192.168.3.2-192.168.3.33 inside
        !
        dhcprelay server 192.168.1.254 outside
        dhcprelay enable inside
        dhcprelay setroute inside

    On my local ASA, I have two ACLs for UDP ports 67 and 68 permitting any inbound traffic from the remote location's IP.
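
    Looking at my own dump above, the leftover dhcpd lines stand out. My understanding (which may be wrong) is that the ASA won't relay on an interface that still has a local DHCP pool bound to it, so this is the cleanup I intend to try - a sketch:

        ! remove the leftover local pool from the inside interface
        no dhcpd address 192.168.3.2-192.168.3.33 inside
        ! then re-apply the relay configuration
        dhcprelay server 192.168.1.254 outside
        dhcprelay enable inside
        dhcprelay setroute inside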

  • How to create custom content for the nginx 502 error page while keeping the original URL in the browser

    - by user123862
    I'm trying to serve a custom-language message on nginx's 502 error page while keeping the original URL in the browser, without success. For example, if I go to xaluan.com/aaa/bbb.html while the backend server is down, nginx shows error 502; I want the same URL in the address bar, but my custom message in my language.

    Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but the site shows the default nginx error at domain.com/502.html (the content of the page is not the one I created):

        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
        }

    Test 2: I then created the same page in my www domain folder, /home/xaluano/public_html/502.html, but this keeps redirecting me to domain.com/502.html. The content is now the one I created, but the URL is still not what I need:

        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;
        }

    EDIT/UPDATE (10/06/2012), for more detail: please download my nginx config from http://pastebin.com/7iLD6WQq and the vhost config from http://pastebin.com/ZZ91KiY6.

    The test case: if the Apache httpd service is stopped (service httpd stop), then open a browser and go to xaluan.com/modules.php?name=News&file=article&sid=123456 - I will see the 502 error with the same URL in the browser's address bar.

    The custom error page: I need a config which, when Apache fails, shows a custom message telling the user to wait one minute for the service to come back, then refreshes the current page with the same URL (the refresh I can do easily with JavaScript). As long as nginx doesn't change the URL, the JavaScript can work it out. Any help will be great... thanks in advance.
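
    One more variation I will try, as a sketch - I'm not sure proxy_intercept_errors matters when nginx generates the 502 itself rather than receiving one from Apache, but the internal location should at least prevent the redirect:

        proxy_intercept_errors on;           # let error_page handle upstream errors too
        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;  # folder holding the custom page
            internal;                        # serve in place, no browser redirect
        }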

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

        $ echo -n | openssl s_client -connect www.google.com:443
        CONNECTED(00000003)
        depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        verify error:num=20:unable to get local issuer certificate
        verify return:0
        ---
        Certificate chain
         0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
           i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
         1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
           i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
        MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
        THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
        MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
        MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
        FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
        gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
        gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
        05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
        BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
        LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
        BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
        Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
        ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
        AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
        u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
        z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
        -----END CERTIFICATE-----
        subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
        issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1777 bytes and written 316 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
            Session-ID-ctx:
            Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
            Key-Arg   : None
            Start Time: 1266259321
            Timeout   : 300 (sec)
            Verify return code: 20 (unable to get local issuer certificate)
        ---

    it just shows that the negotiated cipher suite is AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing.

    Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
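
    Following up on the update, here's the hack I had in mind - a sketch that iterates over every suite the local openssl build knows and reports which handshakes succeed (slow, and blind to any suites your own openssl doesn't support):

        # try each locally-known cipher suite against the target, one at a time
        for c in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
            if echo -n | openssl s_client -cipher "$c" \
                -connect www.google.com:443 >/dev/null 2>&1; then
                echo "accepted: $c"
            fi
        done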

  • How to upgrade Apache 2 from 2.2 to 2.4

    - by Nina
    I was in the process of doing a test upgrade from Apache 2.2 to 2.4.3 on Ubuntu 10.04. (I would have upgraded to 12.04 first to see if the upgrade would go more smoothly, but unfortunately I was told that wasn't an option, so I'm stuck on 10.04.) Before attempting this, I also upgraded APR from 1.3 to 1.4, since Apache told me beforehand that it was a requirement: http://apr.apache.org/download.cgi

    The process I followed: first, remove all traces of the current Apache:

        sudo apt-get --purge remove apache2
        sudo apt-get remove apache2-common apache2-utils apache2.2-bin apache2-common
        sudo apt-get autoremove
        whereis apache2
        sudo rm -Rf /etc/apache2 /usr/lib/apache2 /usr/include/apache2

    Afterwards, I did the following:

        sudo apt-get install build-essential
        sudo apt-get build-dep apache2

    Then install Apache 2.4 with the following:

        wget http://apache.mirrors.tds.net//httpd/httpd-2.4.3.tar.gz
        tar -xzvf httpd-2.4.3.tar.gz && cd httpd-2.4.3
        sudo ./configure --prefix=/usr/local/apache2 --with-apr=/usr/local/apr --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork
        sudo make
        sudo make install

    During the build, I ended up getting a series of errors that prevented it from installing correctly:

        exports.c:2513: error: redefinition of 'ap_hack_apr_uid_current'
        exports.c:1838: note: previous definition of 'ap_hack_apr_uid_current' was here
        exports.c:2514: error: redefinition of 'ap_hack_apr_uid_name_get'
        exports.c:1839: note: previous definition of 'ap_hack_apr_uid_name_get' was here
        exports.c:2515: error: redefinition of 'ap_hack_apr_uid_get'
        exports.c:1840: note: previous definition of 'ap_hack_apr_uid_get' was here
        exports.c:2516: error: redefinition of 'ap_hack_apr_uid_homepath_get'

    Looking for exports.c only leads me back to the httpd-2.4.3 folder, so I'm not sure what these errors mean... Thanks in advance for any help you have to offer!
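
    The duplicated ap_hack_* symbols in the generated exports.c make me wonder whether two APR installations are being mixed at configure time (the system 1.3 and my hand-built 1.4). A sketch of the check and the clean rebuild I'm going to attempt - the apr-1-config path is where my 1.4 build should have put it:

        # which APR does the build actually see?
        which apr-1-config
        /usr/local/apr/bin/apr-1-config --version
        # rebuild from a clean tree against exactly one APR
        make distclean
        ./configure --prefix=/usr/local/apache2 \
            --with-apr=/usr/local/apr/bin/apr-1-config --with-mpm=prefork
        make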

  • Windows Server (SBS) 2008 - Telephony service won't start (missing permissions)

    - by Uri
    I am running an SBS 2008 server, set up as the domain controller for the network. After a reboot, the Telephony service (and all services that depend on it) refuses to start under the Network Service account. The error given is:

        Error 1297: A privilege that the service requires to function properly does not exist in the service account configuration. You may use the Services Microsoft Management Console (MMC) snap-in (services.msc) and the Local Security Settings MMC snap-in (secpol.msc) to view the service configuration and the account configuration.

    This has made all the network services inaccessible: terminal services, VPN (RRAS), and the SQL Server instances. The SSH daemon I have running on the box will accept connections only from localhost and won't respond on the network.

    After searching around, the only advice I could find was to grant the Network Service account these permissions:

    - Adjust memory quotas for a process
    - Replace a process level token

    I set those permissions in both the Default Domain Policy and the Default Domain Controllers Policy, but it seemingly had no effect. Most of the services will start if I change them to run under the Local System account, but that doesn't make them accessible on the network. I even tried removing the Routing and Remote Access Services feature, rebooting, and reinstalling it, but the issue remains. Any ideas?
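
    A diagnostic sketch that might narrow this down: compare the privileges the service declares as required against what Group Policy actually delivered ("tapisrv" is the Telephony service's short name; the export path is just an example):

        rem privileges the Telephony service says it requires
        sc qprivs tapisrv
        rem re-apply policy, then dump the effective user-rights assignments
        gpupdate /force
        secedit /export /cfg C:\temp\rights.inf /areas USER_RIGHTS
        findstr /i "SeIncreaseQuotaPrivilege SeAssignPrimaryTokenPrivilege" C:\temp\rights.inf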

  • In Windows 7, why can't I use perfmon against a remote server?

    - by SomeGuy
    I am on Windows 7, trying to run perfmon against Windows 2003 and Windows 2008 servers, and I run into the same issue with all remote machines.

    When creating a data collector set, I specify a domain account that is in the Administrators group on the remote machines (and in "Performance Log Users" and "Performance Monitor Users" to be safe). On the "Available Counters" screen, when I type in a remote computer name, perfmon locks up for a good 2-3 minutes before I can add any counters. I can then save the collector set. However, once it is saved, the go/stop buttons are disabled if I click the set in the left panel, and missing if I click the data collector set itself in the right panel. See the screens below.

    I can run data collector sets against my local machine with no problem. I am opening perfmon with my local account in both scenarios, and I have the Remote Registry service started on each remote machine. What is going on?
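
    In case the MMC itself is the problem, a workaround sketch I've been meaning to test is driving an equivalent collector from the command line (the counter path and interval are just examples):

        rem create and start a counter collector against the remote box
        logman create counter RemoteCPU -s \\remoteserver ^
            -c "\Processor(_Total)\% Processor Time" -si 15
        logman start RemoteCPU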

  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and "block all third party cookies without exception" settings enabled in Chrome 12 (I'm not sure what the exact names of these settings are in English, as I run Chrome with Swedish localization). I do, however, have two problems.

    My first problem is that when I visit one of my local newspaper's sites (and surely others), cookies from www.facebook.com are allowed for some reason. I suspect the reason is that I have added an exception for the www.facebook.com domain, but as the setting "block all third party cookies without exception" implies, that shouldn't matter.

    My second problem is that if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my whitelist, primarily from ad services. My expectation from enabling the above settings was that only cookies fulfilling both of the following requirements would be accepted:

    - the cookie must be from the domain in my address bar
    - the cookie must be from a domain on my whitelist

    Apparently this isn't the case. The question is: have I completely misunderstood the settings, or is this a bug? And either way, is there a way to accomplish my desired behavior?

  • Network connection to Firebird 2.1 became slow after upgrading to Ubuntu 10.04

    - by lyle
    We've got a setup that we use for different clients: a program connecting to a Firebird server on a local network. So far we've mostly used 32-bit processors running Ubuntu LTS (recently upgraded to 10.04). Now we've introduced servers running on 64-bit processors with Ubuntu 10.04 64-bit, and suddenly some queries run slower than they used to.

    In short: running a query locally works fine on both 64-bit and 32-bit servers, but when running the same query over the network, the 64-bit server is suddenly much slower. We did a few checks with both local and remote connections to both 64-bit and 32-bit servers, using identical databases and identical queries, running in FlameRobin.

    Running the query locally takes a negligible amount of time: 0.008s on the 64-bit server, 0.014s on the 32-bit servers. So the servers themselves are running fine. Running the queries over the network, the 64-bit server suddenly needs up to 0.160s to respond, while the 32-bit server responds in 0.055s. So the older servers are twice as fast over the network, in spite of the newer servers being twice as fast when run locally.

    Apart from that, the setup is identical: all servers are running the same installation of Ubuntu 10.04, the same version of Firebird, and so on; the only difference is that some are 64-bit and some 32-bit. Any ideas?? I tried to google it, but I couldn't find any complaints that 64-bit Firebird is slower than 32-bit Firebird, except that the Firebird 2.1 change log mentions a new network API which is twice as fast, as soon as the drivers are updated to use it. So I could imagine that the 64-bit driver is still using the old API, but that's a bit of a stretch, I guess. Thanks in advance for any replies! :)

  • How a batch file runs on a remote machine when started by PsExec

    - by user38780
    I am having an issue running a batch file on a remote machine using PsExec. The file runs, but not the way it does when run through Remote Desktop. The batch file launches a 32-bit application, which opens multiple 16-bit applications; these should all run under one ntvdm.exe (in one memory space).

    Through Remote Desktop, the batch file runs under the explorer process and works correctly, opening only one ntvdm.exe. Using PsExec, the batch runs, but not under the explorer process, and a separate ntvdm.exe is opened for each 16-bit process.

    I found that launching the batch via explorer in PsExec works, but it comes up with a "File Download - Security Warning", e.g.:

        psexec.exe \\compname -u username -p password -s -d -i 0 explorer C:\Program.bat

    I want to be able to run the batch successfully without receiving warnings. It is a local warning, not a network-share warning; it's possible to recreate the warning by typing "explorer C:\windows\system32\cmd.exe" in Run.

    I would like to know if anyone knows a way to get PsExec to run the batch file as though it was started by explorer, or a way of removing the local "File Download - Security Warning". Thanks
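
    Before fighting the explorer warning further, a couple of things I plan to look at, sketched below: whether this machine defaults 16-bit apps into separate VDMs, and whether requesting the shared VDM explicitly changes anything (the app name is a placeholder):

        rem is each 16-bit app being forced into its own VDM by default?
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\WOW" /v DefaultSeparateVDM
        rem when launching a 16-bit app directly, ask for the shared VDM
        start /shared someapp16.exe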
