Search Results

Search found 24207 results on 969 pages for 'anonymous users'.


  • How do I configure a secondary gateway in RHEL5?

    - by Brett Ryan
    Greetings, we have been experiencing a random timeout issue with VPN users connecting to one of our servers, which is causing a problem. My network administrator has instructed me to configure a secondary gateway to cover the VPN connection. My current configuration is as follows; 10.1.9.1 is the internal gateway to the internet, and I'd like to add 10.1.1.20 as the VPN gateway.

        # Broadcom Corporation NetXtreme II BCM5708S Gigabit Ethernet
        DEVICE=eth0
        BOOTPROTO=none
        BROADCAST=10.1.255.255
        IPADDR=10.1.1.22
        IPV6_AUTOCONF=yes
        NETMASK=255.255.0.0
        NETWORK=10.1.0.0
        ONBOOT=yes
        GATEWAY=10.1.9.1
        TYPE=Ethernet
        USERCTL=no
        IPV6INIT=no
        PEERDNS=yes
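
    One common approach on RHEL5 is a static route rather than a true second default gateway; a minimal sketch, assuming the VPN clients sit in a 10.8.0.0/16 range (that range is an assumption - substitute the real one):

        # /etc/sysconfig/network-scripts/route-eth0 (hypothetical example)
        # route the assumed VPN client range via the VPN gateway
        10.8.0.0/16 via 10.1.1.20 dev eth0

    After "service network restart" (or an immediate "ip route add 10.8.0.0/16 via 10.1.1.20"), 10.1.9.1 remains the default gateway while traffic for the VPN range uses 10.1.1.20.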

    Read the article

  • How do I package this vbscript as an MSI for Group Policy

    - by TheCleaner
    I had a developer who is no longer with us create an MSI to do this for me, but the package is outdated now and we need to deploy new files. Basically I need to do the following: take the code at the bottom of this question and deploy it to all users as a software installation package in Group Policy. I don't want to use a computer startup script because I don't want this to run at every login - just once, to install and be done. How can I take the below and turn it into an MSI for deployment through GPO?

        @echo off
        delete "C:\Windows\Downloaded Program Files\jdeexpimp.inf"
        delete "C:\Windows\Downloaded Program Files\jdeexpimpU.ocx"
        delete "C:\Windows\Downloaded Program Files\jdewebctls.inf"
        delete "C:\Windows\Downloaded Program Files\jdewebctlsU.ocx"
        copy "\\tuldc01\EOneActiveXapplets\ActiveX898\jdeexpimpU\*" "C:\Windows\Downloaded Program Files\"
        copy "\\tuldc01\EOneActiveXapplets\ActiveX898\jdewebctlsU\*" "C:\Windows\Downloaded Program Files\"
        regsvr32 "C:\Windows\Downloaded Program Files\jdeexpimpU.ocx"
        regsvr32 "C:\Windows\Downloaded Program Files\jdewebctlsU.ocx"

    Read the article

  • How to update Sharepoint 2010 user profile for user whose account name has changed in AD?

    - by Daniel Root
    We have an issue with User Profile Sync in SharePoint 2010 when the following happens:

    - A new user is added to AD (e.g. DOMAIN\jdoh)
    - The user is synced successfully to SharePoint
    - Time passes
    - The user's account name is changed in AD (e.g. because it was originally misspelled: DOMAIN\jdoe)
    - The user is re-synced to SharePoint

    The behavior appears to be that the account name is not changed. In the above example, the account name will continue to be DOMAIN\jdoh in SharePoint, though other properties are synced correctly - I would assume by SID. This means that the user's My Profile and My Site links still refer to the 'old' name (i.e. Person.aspx?accountname=Domain\jdoh). What steps should be taken in SharePoint to fix this when an account name is changed in AD?
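
    One commonly cited fix - a sketch only, not verified against this environment (the web application URL below is a placeholder) - is to tell SharePoint about the rename explicitly rather than relying on the sync:

        rem stsadm: map the old login to the new one across the farm
        stsadm -o migrateuser -oldlogin DOMAIN\jdoh -newlogin DOMAIN\jdoe -ignoresidhistory

        # or the SharePoint 2010 PowerShell equivalent
        $user = Get-SPUser -Web "https://portal.example.com" -Identity "DOMAIN\jdoh"
        Move-SPUser -Identity $user -NewAlias "DOMAIN\jdoe" -IgnoreSID

    A full profile synchronization run afterwards should then pick up the corrected account name and My Site links.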

    Read the article

  • Windows/Samba connection error

    - by Gomibushi
    I have a Linux file server serving up /home for Linux and Windows users. I was able to connect from my Windows client, but not from a DC. Then suddenly I could connect from the DC too. The Linux servers run Centrify clients, and as such are part of the domain. All are on the same subnet. This is what log.smbd says, repeatedly:

        [2010/02/11 11:25:57, 0] lib/util_sock.c:read_data(534)
          read_data: read failure for 4 bytes to client 192.168.200.3. Error = Connection reset by peer

    On Windows it appeared as an "unknown error". EDIT: the error code is "0x80004005". We are developing a system dependent on the Samba share, and are worried this will appear again. It would be nice to pinpoint the root of this. Any ideas what this might be? Places to look?

    Read the article

  • How can I move mysites to a new location

    - by Bob
    I recently restored my content and was instructed to create My Sites in a different location than was originally used. Now I have several users' My Sites in /personal. The new desired location is /mysites. From what I found in the documentation, I should back them up and restore them to the new location. Here's what I've done:

    Back up the individual site collection for a user's My Site:

        stsadm -o backup -url "https://myUrl/personal/john_smith" -filename johnsmith.bkup

    Restore the individual site collection for the user's My Site:

        stsadm -o restore -url "https://myUrl/mysites/john_smith" -filename johnsmith.bkup -overwrite

    The result of this - and the problem - is that when I enumerate sites I end up with this:

        <Site Url="https://myUrl/mysites" Owner="domainname\john.smith" ContentDatabase="WSS_Content_MySites"
              StorageUsedMB="1.6" StorageWarningMB="90000" StorageMaxMB="100000" />

    It leaves off the username part of the URL, and if I restore more than one they want to overwrite each other.
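
    A likely cause - an assumption from the symptom, not something stated in the question - is that /mysites is not defined as a wildcard-inclusion managed path, so every restore collapses onto /mysites itself. A sketch of the fix:

        rem define /mysites as a wildcard inclusion so each user gets /mysites/<username>
        stsadm -o addpath -url "https://myUrl/mysites" -type wildcardinclusion

    With the managed path in place, the restore command above should preserve the username segment of the URL.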

    Read the article

  • PostgreSQL on Amazon EBS volume, realistic performance, or move to something more lightweight?

    - by Peck
    Hi, I'm working on a little research project, currently running as an instance on EC2, and I'm hoping to figure out whether I'm going down the right path. We, like a thousand other people, are making use of some of Twitter's streaming feeds to gather some data to have fun with, and my DB seems to be having problems keeping up; queries take what seems to be a very long time. I'm not a DBA by trade, so I'll just dump some info here and add more if need be.

    System specs:
    - EC2 XL, 15 GB of RAM
    - EBS: 4 x 100 GB drives, RAID 0

    With the stream we're getting, we're looking at around 10k inserts per minute. There are 3 main tables, with the users we're tracking somewhere in the neighborhood of 26M rows currently. Is this volume of inserts on this hardware too much to ask of EBS? Should I take a look at something with less overhead, like MongoDB?

    Read the article

  • EJB Persist On Master Child Relationship

    - by deepak.siddappa(at)oracle.com
    Let us take a scenario where a user wants to persist a master-child relationship. Here we have two tables, dept and emp (using the Scott schema), which have a master-child relation.

    Model diagram: Dept is the master table and Emp is the child table, and Dept is related to Emp by a one-to-n relationship. Let's assume we need to make new entries in the emp table using the EJB persist method.

    Create an Emp form by manually dropping the fields, where deptno (which is a foreign key in the emp table) is dropped as Single Selection -> ADF Select One Choice from the deptFindAll data control. Make sure to bind all field variables in the backing bean.

    Employee form: once the Emp form is created, if the persistEmp() method is used to commit the record, it will persist all the Emp fields into the emp table except deptno, because deptno is passed as an object reference to the persistEmp method (it is a foreign key reference). So deptno can't be passed to the persistEmp method directly; instead, deptno should be explicitly set on the emp object, and then the persist will save the deptno to the emp table.

    The solution below is one way to work around this scenario. Create a method in the session bean for adding emp records and expose this method in the data control. For example, in the code below "em" is an EntityManager declared as a member variable of sessionEJBBean:

        private EntityManager em;

        public void addEmpRecord(String ename, String job, BigDecimal deptno) {
            Emp emp = new Emp();
            emp.setEname(ename);
            emp.setJob(job);
            // set the deptno explicitly via the Dept entity
            Dept dept = new Dept();
            dept.setDeptno(deptno);
            // pass the dept object to the emp entity
            emp.setDept(dept);
            // persist the emp object to the Emp table
            em.persist(emp);
        }

    From the Data Control palette, drop addEmpRecord as an ADF Method button. In the Edit Action Binding window, enter the parameter values that are bound in the backing bean. For example, if the deptno text field is bound to the "deptno" variable in the backing bean, then in the EL Expression Builder pass the value as "#{backingbean.deptno.value}".

    Binding:

    Read the article

  • How can I get vim to set an ACL on its swap files?

    - by thsutton
    I use vim on an OS X Snow Leopard Server machine. A number of the directories I work in have ACLs (so that various groups of users can access them over AFP) that are inherited. For some reason, when I'm working in one of these directories, vim cannot read its own swap files. It can create them fine but can't read them, which, for some reason, makes it display the "swap file already exists" message (and no, the swap file does not already exist). vim -r lists the newly created swap file as "[cannot be read]". The owner and group are correct and the permissions are 0600, and the ACLs on the swap file and the file I'm editing are identical (as disclosed by ls -le and compared with diff). groups returns the same thing whether invoked from my login shell or via :! in vim. Has anyone encountered (and hopefully resolved) a problem like this before?
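
    Not a fix for the ACL inheritance itself, but one workaround sketch (the paths here are assumptions) is to keep vim's swap files out of the ACL-managed directories entirely, via ~/.vimrc:

        " store swap files in a private directory instead of next to the edited file
        set directory=~/.vim/swap//

    Create ~/.vim/swap first; the trailing // makes vim encode the full path of the edited file into the swap file name, so files with the same name in different directories don't collide.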

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and - very important - what the security requirements are. From the answers to these types of questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead - it needs a certain amount of memory for locking and so on. But it has a very clean boundary - everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier - I lay out the choices in rows on a spreadsheet, and my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix - perhaps this person could segment customers into larger regions or districts or products in a database, and within that database have multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • Beginners Guide to Client Application Services

    - by mbcrump
    What is it? Client application services make it easy for you to create Windows-based applications that use the ASP.NET AJAX login, roles, and profile application services included in the Microsoft ASP.NET 2.0 AJAX Extensions. These services enable multiple Web and Windows-based applications to share user information and user-management functionality from a single server.

    What can you do with it?
    - Authenticate a user. You can use the authentication service to verify a user's identity.
    - Determine the role or roles of an authenticated user. You can use the roles service to change the user interface of your application depending on the user's role. For example, you can provide additional features for users who are in an administrator role.
    - Store and access per-user application settings located on the server. You can use the Web settings service (also known as the profile service) to share settings across multiple applications and locations.

    Client application services take advantage of the Web services extensibility model through client service providers that you can specify in your application configuration files. These service providers include offline functionality that uses a local cache for authentication, roles, and settings data when a network connection is unavailable.

    Give me an example of where I would use this! Sharing login and user role information between a Windows Forms application and an ASP.NET application.

    How do I configure it? Click Here

    Read the article

  • Two subnets on one switch with no VLAN and possible problems

    - by casey_miller
    As far as I know, in order to use two subnets on one physical cable, VLANs are recommended. However, is it possible to achieve this (i.e. two subnets, such as 192.168.1.0/24 and 10.0.0.0/8, on one physical network)? What kind of problems or hidden rocks does this approach bring? With VLANs it's possible to isolate better, so users couldn't easily sniff the other network. But in my environment it's okay if a user on one subnet can listen to the traffic on the other network. Is that the only problem?
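
    For what it's worth, any host (or router) that needs to talk to both subnets on the shared wire can simply carry an address in each; a minimal sketch on Linux, using the subnets from the question (the interface name and host addresses are assumptions):

        # primary address in the first subnet
        ip addr add 192.168.1.10/24 dev eth0
        # secondary address in the second subnet, on the same physical interface
        ip addr add 10.0.0.10/8 dev eth0

    Hosts with an address in only one subnet will ignore the other subnet's traffic at the IP level, but as the question notes, nothing prevents them from capturing it if it reaches their port.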

    Read the article

  • How to "ignore" username and password prompt in net use

    - by Mattisdada
    I currently have a logon.cmd script that I'm using to map network drives to the user's profile. It looks like this:

        ::Onboarding
        net use m: /delete
        net use m: \\BOB\onboarding
        ::Bookings
        net use n: /delete
        net use n: \\BOB\bookings
        ::Accounts
        net use j: /delete
        net use j: \\BOB\accounts

    It works fine until it gets to a share that the current user cannot access; it then asks for a username and password instead of erroring and continuing. Notes: this very script used to work on another Samba PDC network, but I've moved it over to another server (still a Samba PDC) and now it's breaking. Is there any way for it to skip the username/password prompt and just continue?
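
    One workaround sketch - untested here, so treat it as an assumption to verify - is to redirect standard input from NUL so that net use fails immediately instead of stopping to ask for credentials:

        ::Accounts - error out instead of prompting if the user has no access
        net use j: /delete
        net use j: \\BOB\accounts < nul

    Mappings the user cannot access then simply fail, and the script carries on to the next drive.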

    Read the article

  • Why is mcrypt not included in most Linux distributions?

    - by Daniel Lopez
    libmcrypt is a powerful encryption library that is very popular with PHP-based applications. However, most Linux distributions do not include it. This causes problems for many users who need to download and compile it separately. I am guessing that the reason it is not shipped is related to encryption or patent issues. However, the source code for the library itself is hosted and available on sourceforge.net. I have been searching unsuccessfully for a document or authoritative post that explains exactly why this library is not bundled with mainstream distributions. Can anyone provide a pointer to such material, or an explanation?

    Read the article

  • Boot time virus scan from USB drive

    - by Tomas Sedovic
    I want to check for viruses on a computer that I suspect may be infected with malware. Its users are running an antivirus, but there's always the risk that something slips past, and the way I see it, once the system is infected the antivirus is useless because the malware can hide itself from the AV. I think the best way to go (besides a clean reinstall of the OS) would be to have an antivirus running at boot time from a CD or a USB key. That way the malware is just lying on the disk and cannot do any of its hide-and-seek stuff (provided the AV comes from an uninfected PC and all that). So, I'm looking for something that:

    - Runs at boot time (off a USB key or CD-ROM)
    - Does not touch or require the local OS
    - Discovers malware fairly well (like Avast, AVG, Norton, whatever - I think they're all the same anyway)
    - Can handle Windows filesystems (FAT32, NTFS, WinFS ;-) )
    - Comes from some sort of trusted source (no Windows Antivirus 2009)

    I know that this is no silver bullet (nothing is, really), but I do have a feeling it's more likely to help than doing the scan within the infected system.

    Read the article

  • How to set up a reliable SMTP server on Windows Server 2008 R2

    - by everwicked
    I know there are SMTP services out there which you can pay to send e-mail with, but surely it's not that difficult to set up one of your own. How can I set up an SMTP server on Windows Server 2008 R2 that is:

    - Secure: only authorized users/hostnames/etc. can send mail
    - Reliable: e-mails don't get lost
    - Not treated as spam: when e-mails are received by, say, Gmail/Outlook/Hotmail they don't go straight to junk

    I understand the last point depends both on the server + e-mail headers AND on the e-mail content - I'm looking to safeguard the server part. Thanks!
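
    On the deliverability side, the server-level pieces are mostly DNS; a sketch (the domain and IP below are placeholders, not from the question):

        ; SPF record naming the server that is allowed to send for the domain
        example.com.   IN  TXT  "v=spf1 ip4:203.0.113.25 -all"

    plus a matching reverse-DNS (PTR) record for 203.0.113.25 pointing back at the mail server's hostname, arranged through whoever controls that IP block. Requiring authentication for relaying and keeping queue/retry settings sane cover the "secure" and "reliable" points, but those are configured in the SMTP service itself.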

    Read the article

  • Let's Get It Started! Oracle OpenWorld Music Festival

    - by Oracle OpenWorld Blog Team
    By Karen Shamban

    How are you spending your day at Oracle OpenWorld? At the Oracle Users Forum? Getting some training at Oracle University? Meeting up with colleagues and friends to discuss technology? Doing some or all of the above while enjoying the gorgeous fall weather in San Francisco? Regardless of how your day is going, be sure to attend the opening keynote this evening - starting at 5:00 p.m. - at Moscone North, Hall D. Larry Ellison is the featured keynoter, so you know he'll have something interesting and intriguing to say. Following the keynote is the Welcome Reception, being held this year in both the Howard Street Tent and Yerba Buena Gardens. Debuting tonight is the Oracle OpenWorld Music Festival, and it's going to be awesome! The schedule for this evening is below. Note that due to limited capacity at some venues, admission (free with your Oracle OpenWorld badge) is first-come, first-served. Enjoy yourself, and rock on!

        Time         Performer           Venue
        7:00 p.m.    DJ ZAQ              John Colins
        7:00 p.m.    DJ Blondie K.       Ruby Skye
        7:15 p.m.    The Velvet Teen     Mezzanine
        8:30 p.m.    Astral              Mezzanine
        8:30 p.m.    Macy Gray           Yerba Buena Gardens
        9:00 p.m.    American Steel      Mezzanine
        8:30 p.m.    Magic Wands         Ruby Skye
        10:00 p.m.   The Crystal Method  Ruby Skye
        10:30 p.m.   Dirty Ghosts        Mezzanine

    Read the article

  • Windows Server 2008R2 IIS7.5, Requests getting 401 status

    - by TLBH
    We have a web site running on Windows Server 2008 R2 / IIS 7.5, and we are seeing several errors reported from our global error handler saying "Request Timed Out". I matched one up to the IIS log file and see the request took 135116 (presumably milliseconds), had an sc-status of 401, an sc-substatus of 0, and an sc-win32-status of 64. Two requests failed in this way, but lots of surrounding requests (1979 successes vs. 2 failures) for the same user went through perfectly fine - with the same cs-username - which makes a 401 seem a little odd. The target of the requests is an ASP.NET web service's web method called by the .NET client library; it's called many times per user (3 times per second) to keep a page updated. We're getting some users reporting a freezing effect and I think this may be the cause. Any ideas? Peter

    Read the article

  • Hosted application, DNS server setup?

    - by Ward Loockx
    Currently I'm allowing users to have a hosted application. At the moment they have to point A records to our servers (sometimes this is too hard or gets messy). I've seen other providers using two DNS servers, so that the user only needs to change those. I'm willing to implement this, but a lot of questions come up:

    - What should I use for this? Can I use BIND? The records need to be generated from a MySQL database.
    - What type of servers do I need? Does a DNS server take a lot of load? We currently have around 80K daily visitors.

    Thanks!
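
    To illustrate the model (all names and addresses below are placeholders): the customer changes only the nameservers for their domain at the registrar, and your servers answer with whatever records your platform needs:

        ; delegation the customer sets at their registrar
        customerdomain.com.      IN  NS  ns1.yourplatform.com.
        customerdomain.com.      IN  NS  ns2.yourplatform.com.

        ; records your DNS servers then serve for that zone
        customerdomain.com.      IN  A   203.0.113.10
        www.customerdomain.com.  IN  A   203.0.113.10

    BIND can serve zones like this, though generating them from MySQL is more natural with a database-backed authoritative server such as PowerDNS; either way, authoritative DNS for a site with around 80K daily visitors is generally a very light load.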

    Read the article

  • Automatically encrypting incoming email

    - by user16067
    I have a small website and would like to encrypt incoming email using GPG. Is there a way to force sendmail to check email and encrypt it if it's not already encrypted? I'm using GPG on a Linux server. Thanks.

    [Added] Someone asked what I hope to accomplish. My intent is for the users of the email to become more familiar with seeing their own email encrypted and to lose that fear of the unknown. The side benefit is that the email can't be looked at later down the road. If the email isn't encrypted on its way in, I'm unable to do anything about it. I'm assuming most email would be nosed around with once it's already on my hard drive, so GPG would protect against those issues.
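
    One approach sketch - delivering through procmail rather than patching sendmail itself, with the recipient key ID as a placeholder and the recipe untested here:

        # ~/.procmailrc - encrypt the body of anything that doesn't already look encrypted
        :0 fb
        * !^Content-Type: multipart/encrypted
        | gpg --batch --trust-model always --armor --encrypt -r [email protected]

    Mail that arrives in plaintext gets rewritten encrypted to the local public key before it reaches the mailbox, which matches the goal of protecting messages once they're sitting on the server's disk.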

    Read the article

  • Securing credentials passed to web service

    - by Greg Smith
    I'm attempting to design a single sign-on system for use in a distributed architecture. Specifically, I must provide a way for a client website (that is, a website on a different domain/server/network) to allow users to register accounts on my central system. So, when the user takes an action on a client website, and that action is deemed to require an account, the client will produce a page (on their site/domain) where the user can register for a new account by providing an email and password. The client must then send this information to a web service, which will register the account and return some session-token type value. The client will need to hash the password before sending it across the wire, and the web service will require HTTPS, but this doesn't feel like it's safe enough and I need some advice on how I can implement this in the most secure way possible. A few other bits of relevant information:

    - Ideally we'd prefer not to share any code with the client.
    - We've considered just redirecting the user to a secure page on the same server as the web service, but this is likely to be rejected for non-technical reasons.
    - We almost certainly need to salt the password before hashing and passing it over, but that requires the client to either a) generate the salt and communicate it to us, or b) come and ask us for the salt - both feel dirty.

    Any help or advice is most appreciated.

    Read the article

  • How to assign output bin on HP4730?

    - by user38138
    We have an application that prints batches of invoices, and when two users print at the same time their jobs get interspersed, because the application actually generates a separate job for each invoice. The HP 4730 has a bin for photocopies/fax and the bulk bin. A proposal was to create a separate printer definition for each user and somehow map their output to a different bin to keep their jobs "together". However, we can't see any setting to control this in the printer properties. Does anyone know if that's possible? To make it more interesting, this is within Citrix... Help!

    Read the article

  • What is the solid Windows Server 2008 VPS hosting (preferably) in Europe?

    - by Jakub Šturc
    I am looking for Windows Server 2008 VPS hosting to host a few sites/apps with special needs. I spent a few hours googling but I am not satisfied with the results. It looks to me like this business is full of crap. Two thirds of the reviews seem to be faked, and the advertised offers are unavailable or contain some hidden fee. So my question is: which VPS hosting with Windows Server 2008 do you recommend? I slightly prefer a data center in Europe and payment in euros, but it isn't a deal breaker. Note: I didn't make the question community wiki because I am interested in answers from users with some reputation and I want to preserve their answers.

    Read the article

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as to impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, for some requests I think this is needed, because otherwise tokens can be stolen and, before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however, this is difficult to implement as I'm using the ASP.NET Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated over:

    - the URL, possibly minus the query string
    - the verb
    - the request IP (though this is potentially a barrier on some mobile devices)
    - the UTC date and time when the client issues the request

    For the last one I would have the client send that string in a request header, of course - and I can use it to decide whether the request is 'fresh' enough. My thinking is that whilst this doesn't prevent message-body tampering, it does prevent a malicious third party from using a captured request as a template for different requests later on. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability that is valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for the HMAC appropriately. Does this sound enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
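
    To make the scheme concrete, a sketch of the canonical string and the HMAC computation (the field order, separator, header name, and values are assumptions, not part of the question):

        # canonical string: verb | url-without-query | client ip | utc timestamp
        CANON="GET|/api/orders/42|198.51.100.7|2012-06-01T14:05:00Z"
        # HMAC-SHA-256 with the caller's private key, hex-encoded
        echo -n "$CANON" | openssl dgst -sha256 -hmac "$CALLER_SECRET"

    The client would send the timestamp and the resulting digest in request headers (say, a hypothetical X-Request-MAC), and the server would recompute the same canonical string from what it actually received before comparing digests and checking freshness.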

    Read the article

  • Tracking contributions from contributors not using git

    - by alex.jordan
    I have a central git repo located on a server. I have many contributors who are not tech savvy, do not have server access, and do not know anything about git. But they are able to contribute via the project's web side. Each of them logs on via a web browser and contributes to the project. I have set things up so that when they log on, each user's contributions are made in a cloned repo on the server that is specifically for that user. Periodically, I log on to the server, visit each of their repos, and do a git diff to make sure they haven't done anything bad. If all is well, I commit their changes and push them to the central repo. Of course I need to look at their changes manually so that I can add an appropriate commit message. But I would also like to track who made the changes. I am making the commit, and I (and the web server) are the only users actually writing anything to the server. I could track this in the commit messages. While this strikes me as wrong, if it is my only option, is there a way to make userx's cloned repo always include "userx: " before each commit message that I add, so that I do not have to remind myself which user's repo I am in? Or, even better, is there an easy way for me to make the commit in such a way that I credit the user whose cloned repo I am in?
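
    Git does separate the author from the committer, so the crediting part has a direct answer; a sketch (the name and address are placeholders):

        # record userx as the author while you remain the committer
        git commit -m "Describe userx's web-side changes" --author="User X <[email protected]>"

        # or set the identity once per cloned repo, so plain 'git commit' credits that user
        git config user.name "User X"
        git config user.email "[email protected]"

    The second form makes the user both author and committer for commits made in that repo; the --author flag keeps your own name as committer, which git log --format=fuller will still show.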

    Read the article

  • Problem with OWA search. Mailstore catalogues

    - by Anthony
    Dear all, I have trawled for the last few days but have not been able to come up with a solution; any help would be much appreciated. This is Exchange 2007 Standard on Windows 2003 64-bit, and users report that search is not working in OWA. So I first check that indexing is enabled on the mail stores:

        Get-MailboxDatabase | ft Name,IndexEnabled

    which all report true. So I try a test:

        Test-ExchangeSearch [email protected]

    which comes back false (-1). OK, so I run the reset - ResetSearchIndex.ps1 - which works, and the event log reports the stores being crawled and finished. BUT my complete catalogue data is 300 KB - LOL, I wish! Obviously all tests still fail, so I do it manually: stop the indexing service, delete the catalogues, restart. Still no dice. This is where I'm up to; I have tried many different variations of the above and been through oodles of MS documentation. Has anyone got any ideas?

    Read the article
