Search Results

Search found 47324 results on 1893 pages for 'end users'.


  • Reminder: Java EE 7 Job Task Analysis Survey – Participants Needed

    - by Brandye Barrington
    Java EE Developers/Practitioners, Recruiters, Managers Hiring Java EE Developers: Our Survey Continues. We're looking to you to directly help shape the scope and definition of two new Java EE 7 Certification exams. We'll soon begin certifying front-end and/or server-side enterprise developers who use Java. We're therefore interested in those of you who:
      - are currently working with Java EE 7 technology or have plans to develop with Java EE 7 in the near future.
      - have 2-4 years experience with the previous Java EE technology versions.
      - are recruiting and/or hiring candidates to develop Java EE 7 applications.
      - are technically savvy and able to articulate the skills and knowledge required to successfully staff Java Enterprise Edition front-end and server-side projects.

    Read the article

  • I can connect to Samba server but cannot access shares.

    - by jlego
    I'm having trouble getting Samba sharing working so clients can access shares. I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to be able to share files with a Windows 7 PC and a Mac running OS X Snow Leopard. I've set up Samba using the Samba configuration GUI tool on Fedora, added users to Fedora and connected them as Samba users (the same usernames and passwords as on the Windows and Mac machines). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and Samba client through the firewall and set the ethernet interface to a trusted port in the firewall.

    Both the Windows and Mac machines can connect to the server and view the shares. However, when trying to access the shares, Windows throws error 0x80070035: "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error: "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured.

    smb.conf:

        [global]
        workgroup = workgroup
        server string = Server
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        load printers = yes
        cups options = raw
        printcap name = lpstat
        printing = cups

        [homes]
        comment = Home Directories
        browseable = no
        writable = yes

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = yes
        printable = yes

        [FileServ]
        comment = FileShare
        path = /media/FileServ
        read only = no
        browseable = yes
        valid users = user1, user2

        [webdev]
        comment = Web development
        path = /var/www/html/webdev
        read only = no
        browseable = yes
        valid users = user1

    How do I get Samba sharing working?

    UPDATE: I figured it out; it was because I was sharing a second hard drive. See checked answer below.

    Speculation 1: Before this box I had another box with the same version of Fedora installed (16) and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course), and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs, though). Could either one of these be affecting Samba?

    Speculation 2: As the directory I am trying to share is actually an entire internal disk, I have tried these things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only included one of the folders on the disk instead of the entire disk, with my user again as the owner. Both tests failed, giving me the same errors regarding the network address.

    Speculation 3: Whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem.

    Speculation 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However, when I try to share any data on the second disk, I am receiving the same error. This I believe is the problem. I think I need to change some things in fstab or fdisk or something.

    Speculation 5: So in fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however, I cannot see any of the files in the directory on the Windows machine. (They are there; I can see them in the Fedora file browser locally.)
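    A hedged note on Speculation 5, based only on the symptoms described above: for the files on the second disk to be visible over the share, the samba_share_t label has to be applied to the files themselves, not just the mountpoint directory, and it should be recorded so a relabel doesn't undo it. A minimal sketch, assuming the /media/FileServ path from the smb.conf above (semanage comes from the policycoreutils-python package):

        # Record the label for the whole mounted tree, then apply it recursively
        sudo semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
        sudo restorecon -R -v /media/FileServ

        # Coarser alternative: let Samba export any filesystem read/write
        sudo setsebool -P samba_export_all_rw on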

    Read the article

  • Restoring Exchange 2003 from a backup

    - by user64204
    Hi all, I'm restoring an Exchange server from a backup:

        [1] the backup was created on 19/12/2010
        [2] the server kept running until 20/12/2010
        [3] we're restoring the server today, 21/12/2010, with the backup from [1]

    My understanding is that when the server comes back:

        [4] whatever is in users' inboxes since [1] will be deleted.
        [5] whatever is in users' sent boxes since [2] should be re-sent.
        [6] as a safety measure we've moved all emails sent/received between [1] and [3] to .PST files.

    Questions:

        - Are statements [4] & [5] correct?
        - Is there any way to move emails back from the PST files [6] into the current inbox/sent folders so that Exchange takes these emails into account (instead of deleting them)?
        - What happens to the Calendar items that were added after [1]? Is there any way to back those up as well if needed?

    Many thanks

    Read the article

  • How to restrict deletion of a folder on NTFS share, but still allow modify access within folder

    - by thinkdreams
    I am setting up a set of scan folders from a scanning copier device, and would like to know the best way to protect the folders (one for each department) from being moved or deleted, yet still allow the users to modify (i.e. create/add/delete) the scanned files within each folder. The structure is:

        Share Name
            Departmental Folder
                User files

    The writing of the files initially is taken care of by a service account which has full control. We'd just like to ensure the users cannot accidentally delete the folder containing all the files (which has already happened). This is for a Windows 2003 server, NTFS permissions. Suggestions would be most appreciated.
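    One hedged way to express this with NTFS permissions from the command line, using icacls (included from Server 2003 SP2 onward); the group and path below are placeholders for the real departmental group and folder:

        rem Give the department Modify rights that inherit to everything inside the folder
        icacls "D:\Scans\Accounting" /grant "DOMAIN\Scan Users":(OI)(CI)M

        rem Explicitly deny Delete on the departmental folder itself (no inheritance flags,
        rem so the deny applies only to this folder, not to the files within it)
        icacls "D:\Scans\Accounting" /deny "DOMAIN\Scan Users":(DE)

    The same idea can be set in the folder's Advanced Security dialog: grant Modify applying to "Subfolders and files", and keep Delete off the entry that applies to "This folder only".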

    Read the article

  • I want to increase the size of my boot partition (Ubuntu 14.04 version) [duplicate]

    - by Mike
    This question already has an answer here: "How do I free up more space in /boot?" (11 answers); "How to resize partitions?" (5 answers). I read in another post that kernels are distributed as new releases rather than upgrades. I didn't know this when I was allocating space to my partitions during my initial install of Ubuntu. As a result I ran out of space on my boot partition. Can I increase the size of it using GParted and how do I do this without doing damage to my system?

        1      1049kB  512MB   511MB   fat32                   boot
        2      512MB   768MB   256MB   ext2
        3      768MB   1000GB  999GB                           lvm

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/ubuntu--vg-swap_1: 3712MB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system     Flags
        1       0.00B  3712MB  3712MB  linux-swap(v1)

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/ubuntu--vg-root: 996GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End    Size   File system  Flags
        1       0.00B  996GB  996GB  ext4

    Sorry, don't know how to capture and post the terminal output screen.
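    Since the linked duplicate suggests it, a hedged alternative to resizing: /boot usually fills up with old kernel packages, and purging the unused ones frees the space without touching the partition table. A sketch with placeholder version strings - check uname -r first and never purge the kernel you are running:

        uname -r                                           # the kernel currently in use
        dpkg -l 'linux-image-*' | grep '^ii'               # every installed kernel package
        sudo apt-get purge linux-image-3.13.0-24-generic   # repeat for each unused kernel
        sudo apt-get autoremove --purge                    # clears matching headers/modules
        sudo update-grub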

    Read the article

  • How do I know if I need a layer 3 switch?

    - by eekmeter
    We currently have a flat network with a bunch of unmanaged switches. I would like to use VLANs to segregate certain users, like guests, and I would like to use 802.1X. However, I'm not sure whether what I need is a layer 3 or a layer 2 switch. From what I understand, a layer 3 switch does routing between VLANs. I don't think I need this at the moment, but as I said, I'm not sure, since this is all new to me. What else would a layer 3 switch do for me? Our network is relatively small, fewer than 100 users. What exactly does a layer 3 switch do that I can't get with a layer 2 switch? When would I need a layer 3 switch?

    Read the article

  • Checking collision of bullets and Asteroids

    - by Moaz ELdeen
    I'm trying to detect collisions between two lists, bullets and asteroids. The code works fine, but when a bullet intersects with an asteroid and that bullet passes through another asteroid, the code gives an assertion saying it cannot increment the iterator. I'm sure there is a small bug in this code, but I can't find it.

        for (list<Bullet>::iterator itr_bullet = ship.m_Bullets.begin();
             itr_bullet != ship.m_Bullets.end();)
        {
            for (list<Asteroid>::iterator itr_astroid = asteroids.begin();
                 itr_astroid != asteroids.end(); itr_astroid++)
            {
                if (checkCollision(itr_bullet->getCenter(), itr_astroid->getCenter(),
                                   itr_bullet->getRadius(), itr_astroid->getRadius()))
                {
                    itr_astroid = asteroids.erase(itr_astroid);
                }
            }
            itr_bullet++;
        }
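    A hedged reading of the crash, with a corrected sketch (it also assumes a bullet should disappear once it hits something, which the question doesn't state): if the erased asteroid was the last element in the list, asteroids.erase() returns end(), and the inner for-loop's itr_astroid++ then increments end(), which is exactly the "cannot increment iterator" debug assertion. Only advance the iterator when nothing was erased, and erase the bullet (or at least break) on a hit:

        for (list<Bullet>::iterator itr_bullet = ship.m_Bullets.begin();
             itr_bullet != ship.m_Bullets.end();)
        {
            bool bulletHit = false;

            for (list<Asteroid>::iterator itr_astroid = asteroids.begin();
                 itr_astroid != asteroids.end();)
            {
                if (checkCollision(itr_bullet->getCenter(), itr_astroid->getCenter(),
                                   itr_bullet->getRadius(), itr_astroid->getRadius()))
                {
                    itr_astroid = asteroids.erase(itr_astroid);  // erase() already yields the next element
                    bulletHit = true;
                    break;                                       // this bullet is spent
                }
                else
                {
                    ++itr_astroid;                               // advance only when nothing was erased
                }
            }

            if (bulletHit)
                itr_bullet = ship.m_Bullets.erase(itr_bullet);
            else
                ++itr_bullet;
        }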

    Read the article

  • How to morph from a programmer noob to a guru?

    - by didxga
    I have been a programmer for two years, and I am finding it hard to level up my skills, especially since I am working on legacy code maintenance right now. I think working hard is not enough to elevate my skills, because there are tons of open-source projects around us; the projects I have been involved in are all mixtures of open source --- from front end to back end, from presentation tier to business logic tier. My work is just gluing all of these together, or something even less complex: collecting data from the UI, passing it to the logic module, then retrieving the processed data and putting it back into the UI. Sometimes there is a need to add some simple logic (like assembling the data into a form that fits the business logic interface) while transporting the data. Could you please give me any suggestions on what I should do on the side to improve my skills? Thanks!

    Read the article

  • Exchange 07 to 07 mailbox migration using local continuous replication

    - by tacos_tacos_tacos
    I have an existing Exchange server ex0 and a fresh Exchange server ex1, both 2007 SP3. The servers are in different sites, so users cannot access mailboxes on ex1 since, from my understanding, a standalone CAS is required for this. I am thinking of doing the following:

        1. Enable local continuous replication of the storage group on ex0 to a mapped drive that points to the corresponding storage group folder on ex1.
        2. At some point when the replication is done (small number of users and volume of mail), say late at night on the weekend, disable CAS on ex0 (or otherwise redirect requests on the server side from ex0 to ex1) AND change the public DNS name of the CAS so that it points to ex1.

    Will my plan work? If not, please explain what I can do to fix it.
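    Not an answer to the LCR idea itself, but worth having on the table as the plainly supported alternative: Exchange 2007 can move mailboxes between servers one at a time with the Move-Mailbox cmdlet, which avoids copying the storage group wholesale (each mailbox is briefly offline while it moves). A hedged sketch from the Exchange Management Shell; the storage group and database names are placeholders:

        # Move every mailbox in the old database over to the new server's database
        Get-Mailbox -Database "ex0\First Storage Group\Mailbox Database" |
            Move-Mailbox -TargetDatabase "ex1\First Storage Group\Mailbox Database"

    Users would still need a CAS they can reach once their mailbox lands on ex1, so the DNS/CAS cut-over step above applies either way.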

    Read the article

  • An international mobile app - Should I set up EC2 instances in multiple regions?

    - by ashiina
    I am currently trying to launch a mobile app for users around the world. It is not a spectacular launch which will get millions of users in weeks - just another individual developer releasing an app. I know enough about the techniques of managing timezones, internationalizing strings, and whatnot (the application layer). But I cannot find any information on how I should manage my EC2 instances... Should I be setting up EC2 instances in different regions around the world? Is that a must-do, or is it overkill? I'm aware that it's the ideal solution in terms of performance, but it becomes very tough managing servers in multiple regions: DB issues, AMI management, etc. I'd much rather NOT do so. So I would like to know the general best practice when launching an international app/website. Note: for static content, I know it's better to use a CDN, so I'm planning on doing so.

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern, similar to blogs or forums, where users create/update content and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but in the future if I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
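    For reference, this is how the isolation level is actually selected in MySQL - per session, globally, or in the server configuration (syntax as documented for MySQL 5.x; treat the my.cnf stanza as a sketch to adapt to your setup):

        -- per connection
        SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        -- server-wide at runtime
        SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        # or permanently in my.cnf, under [mysqld]
        transaction-isolation = READ-UNCOMMITTED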

    Read the article

  • External drives show up in Nautilus/Computer even when they are unplugged.

    - by Testament
    I have two 1TB Seagate USB drives (sdc1 and sdd1) connected to an old PC running Fedora 11 without an X server running. Since sdc1 and sdd1 change depending on the order in which they are plugged in, I decided to mount them using their UUIDs instead. These are my fstab entries:

        UUID=d1b28578-451b-4f03-af28-2e8a6d5b7efb /media/Seagate  ext3 defaults,rw,auto,users
        UUID=36bf5df4-934e-42d4-9e25-16a13971509c /media/Projects ext3 defaults,rw,auto,users

    They work fine, but when I unmount them and unplug the USB drives, they still show up in Nautilus (I'm running Nautilus with X11 forwarding to an Ubuntu machine, btw). Now if I remove those entries from fstab, the drives disappear from Computer. If I add the entries back, they show up as unmounted drives even when the drives are not plugged in. How do I do this so they don't show up when they're not plugged in?

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

        1. User opens page to view the web camera image.
        2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000ms.
        3. An FTP connection is enabled for the camera's FTP user.
        4. The web camera opens an FTP connection to the server.
        5. The web camera begins taking photos.
        6. The web camera sends each photo to the FTP server.
        7. On each image URL request, the server reads the latest image uploaded via FTP for that camera from the hard drive, and deletes any older images.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach.

    My original plan was that instead of having the files read from the local disk, the web server would open up an FTP connection to the file server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive.

    The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera issues POST requests with the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:

        1. User opens page to view the web camera image.
        2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000ms.
        3. The camera is sent an HTTP request to start posting images to a provided URL.
        4. The web camera begins taking photos.
        5. The web camera sends POST requests to the server as fast as it can.
        6. On each image URL request, the server reads the latest image from Redis and tells Redis to delete the older image.

    My questions are:

        - Are there any greater overheads to transferring images via HTTP instead of FTP?
        - Is there a simple way to calculate how many cameras we could potentially have streaming at once?
        - Is there any way to prevent potentially DoS'ing our own servers due to web camera requests?
        - Is Redis a good solution to this problem?
        - Should I abandon the PHP/Nginx combination and go for something else?
        - Is this proposed solution actually any good?
        - Will adding HTTPS to the mix cause posting the image to become too slow?

    Thanks in advance, Alan
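    For what it's worth, a minimal sketch of the Redis side of the proposed flow, assuming the phpredis extension and hypothetical script names (receive.php taking the camera's POSTs, latest.php serving the polling page); a short TTL stands in for the explicit delete step, since each new frame simply overwrites the previous one:

        <?php
        // receive.php - the camera POSTs the raw JPEG as the request body
        $cameraId = preg_replace('/[^A-Za-z0-9_-]/', '', $_GET['camera']); // assumed identifier
        $image    = file_get_contents('php://input');

        $redis = new Redis();
        $redis->connect('127.0.0.1', 6379);
        $redis->set("camera:{$cameraId}:latest", $image);
        $redis->expire("camera:{$cameraId}:latest", 10);   // stale frames expire on their own

        <?php
        // latest.php - the browser polls this URL every second
        $cameraId = preg_replace('/[^A-Za-z0-9_-]/', '', $_GET['camera']);

        $redis = new Redis();
        $redis->connect('127.0.0.1', 6379);
        $image = $redis->get("camera:{$cameraId}:latest");

        header('Content-Type: image/jpeg');
        echo ($image !== false) ? $image : '';

    Using $redis->setex("camera:{$cameraId}:latest", 10, $image) would collapse the write and the TTL into a single round trip.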

    Read the article

  • How to sync (or at least view) public / team / shared calendar to Blackberry using BES?

    - by 3rdparty
    Trying to allow 3 people to view and ideally sync (create/edit) common (team) calendar events via BlackBerry and hosted Exchange 2007 BES. My understanding is that BES does not support syncing anything other than a user's primary calendar wirelessly. From what I've researched, the only supported workflow is for a user to create the event in the public calendar in Outlook and then invite team members individually as optional attendees, so the event displays in their calendars (and on their BlackBerrys). I've seen some 3rd-party utilities that claim to support syncing of public folders/calendars:

        - Add2Outlook: http://www.diditbetter.com/add2outlook.aspx
        - WICKSoft: http://www.wicksoft.com/contacts_calendars.htm (needs to be installed on the local Exchange server)

    I've also been told I can sync public/other calendars using Desktop Manager, but I need to avoid any tethered sync with this environment. Am I missing an easier workflow here? There must be tens of thousands of BES users that need the ability to view/share a public, shared or team calendar on their BlackBerry. How can I solve this?

    Read the article

  • When one DC crashes, TFS 2012 stops working

    - by blizz
    We have two Windows 2008 domain controllers. We installed the second DC only a few months ago. We also have a TFS 2012 server on the network. Today, when the older DC crashed, TFS stopped working completely. Local users received messages such as "You are not authorized to access ServerName\Collection". Remote users received messages such as "The server was used in your last session, but it might be offline or unreachable". So my question is, why did TFS not use the second, newer DC instead of just crashing along with the first DC?

    Read the article

  • Group Policy for Setting Passwords: Server 2003 Domain

    - by user1236435
    In my 2003 domain, I am being requested to set a password policy that requires passwords to expire every 4 months, and also requires users to change their password at their next login, due to a security issue. In my domain, my OUs are set up by location, then drilled down to city, and then the users and computers are in separate sub-domains. My question is, how do I set this up for my domain? Will I need to set the policy up for loopback? Can I configure this for just a specific OU? Any suggestions on how to move forward? Any advice is much appreciated, and thanks in advance!
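    Two things worth knowing before building the GPO, plus a hedged sketch. In a Server 2003 domain, password-policy settings (maximum age, length, complexity) only take effect when linked at the domain level - in practice the Default Domain Policy - so a password GPO linked to a specific OU will not apply to domain accounts, and loopback processing does not change that; "Maximum password age" of roughly 120 days would be set there. Forcing a change at next logon is a per-account flag rather than a policy, and can be set in bulk with the standard AD command-line tools (the OU distinguished name below is a placeholder):

        rem Flag every user in an OU to change their password at next logon
        dsquery user "OU=Chicago,OU=NorthAmerica,DC=example,DC=com" -limit 0 | dsmod user -mustchpwd yes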

    Read the article

  • Windows 8 Remote Desktop only allows one user at a time?

    - by segmentation fault
    I tried connecting to Windows 8 using its built-in Remote Desktop feature, but for some inexplicable reason, it requires that no users are logged in on the target machine before a remote user can log in. This has never been a problem with rdesktop on Unixen; I could rdesktop from as many machines as I wanted and any logged-in users would never notice a thing. What's the problem with Windows? Any way to allow concurrent local and remote logins to a Windows 8 machine without hacks or cracks? The "guides" on how to do this that show up in the Google results all suggest replacing a system DLL with a hacked one, but that's not acceptable.

    Read the article

  • Access Control issue

    - by user160605
    OK, this is stumping me, mainly because of my lack of experience with access control. I have two folders I need to keep away from users: Payroll and Banking. I went into Security and took away all the users. I made a new group called "Access Granted" and added it to both folders. I then gave Full Control to the group. I then added a few users to this group. I tested with partial success: I can only get into some folders and subfolders/files. I made sure I clicked on the option for all subfolders. This is my layout: C:\ (folder) -- permissions granted to admin and Access Granted (Full Control). When I look at the problem files/folders, no one has any permissions; I don't even see the group or admin. What am I doing wrong? Thanks
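    A guess at the cause, offered only from the symptoms above: the problem files and folders are not inheriting the new group's entry, either because inheritance is blocked on them or because the entry on the parent was not set to apply to "This folder, subfolders and files". One hedged way to push the grant down the whole tree from the command line, assuming icacls is available (Server 2003 SP2 or later) and using the folder names from the description:

        rem Re-grant the group with inheritance flags and apply it to every existing child object
        icacls "C:\Payroll" /grant "Access Granted":(OI)(CI)F /T
        icacls "C:\Banking" /grant "Access Granted":(OI)(CI)F /T

    The GUI equivalent is ticking "Replace permission entries on all child objects" in the folder's Advanced Security settings.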

    Read the article

  • WiFi problems on several Ubuntu installations

    - by Rickyfresh
    Okay, this is the first time I have ever had to ask a question, as usually the Ubuntu community has already answered everything. On this occasion, though, many people are asking for an answer and not one good solution has become available, so someone please help, or I will have to install Windows on my son's and my girlfriend's PCs, and that would be a disaster as I am trying to convince people to move away from Windows. I installed 12.04 on three computers on the same day:

        - Dell Inspiron (works perfectly)
        - Toshiba Satellite
        - Home-built desktop

    The Dell works perfectly, but the other two either keep losing the wireless connection or, even while connected, stop reaching web sites; for some reason they search Google fine but will not load a site when a link is clicked. So far people have recommended in other forums: removing Network Manager and installing wicd (didn't solve it); changing the MTU in the wireless settings (didn't solve it); all sorts of messing about with Firefox settings (this doesn't solve it, and even if it did, it would leave most average PC users scratching their heads and wishing they had stuck with Windows). The problem exists on two very different machines with different wireless cards, so I doubt it's a driver or hardware issue; many other Ubuntu users are reporting the same problem with a vast array of different machines and wireless cards. Can someone please give a good solution to this, as it's going to turn a lot of people away from Ubuntu if it cannot be sorted. I would give some PC specs, but the two machines are vastly different, and the other people complaining of this problem also have very different systems, all showing the same behaviour.

    Read the article

  • How to share datastores between multiple Exchange servers?

    - by Johan
    I have an Exchange 2003 box that is seriously overstressed. I want to transfer its duties to a new and faster box. I cannot suffer downtime, so I have to do this live. Here's what I plan to do:

        - Install Exchange 2003 on the new server.
        - Set up the new server so it will accept requests from users for their mailboxes. I want to do as little manual setup as possible, because that'll eat up my time and is too error prone.
        - Then transfer my datastores one by one to the new server and have those users (once the datastore on the new server is up and running) get their data from the new server, without them noticing.
        - I don't have to transfer all the datastores; some of them need to stay on the old box (because I'm still waiting for extra HD space to arrive from the supplier).

    What steps do I need to follow to do this? The new box has never seen this domain before, the old Exchange server is not the DC, and we have a dedicated DC.

    Read the article

  • How Do I Parse a String?

    - by Russ
    I am new to bash, and I am creating a script that loops through the files in a directory and, based on part of the filename, does something with the file. So far I have this:

        #!/bin/bash
        DIR="/Users/me/Documents/import/*"
        for f in "$DIR"
        do
          $t=??????
          echo "Loading $f int $t..."
        done

    so $f will output something like this: /Users/me/Documents/import/time_dim-1272037430173. Out of this, I want time_dim. The directory can be variable length, and -1272037430173 is a fixed length (it's the unix timestamp, btw). What is the best way to go about this?
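    A minimal sketch of one way to fill in the blank with plain parameter expansion, keeping the poster's paths (note the glob has to sit outside the quotes for it to expand):

        #!/bin/bash
        DIR="/Users/me/Documents/import"
        for f in "$DIR"/*; do
          name=$(basename "$f")   # e.g. time_dim-1272037430173
          t=${name%-*}            # strip the trailing -<timestamp>, leaving time_dim
          echo "Loading $f into $t..."
        done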

    Read the article

  • WSS 3.0 fails to hide quick launch items for which the current user does not have access

    - by Nils
    I'm running Small Business Server 2008 with Windows SharePoint Services 3.0 (WSS 3.0). I thought WSS was supposed to hide menu items to which the currently logged-in user doesn't have access? Apparently, all users can see all links, regardless of whether they have access. This applies both to links to newly created sub-sites and to document libraries/lists. Is this expected behaviour, or is there a misconfiguration somewhere that causes the links to stay visible even for users without access? Thanks!

    Read the article

  • Win 2003 Junction Point to Remote Unix Share

    - by Pogrindis
    Env: Windows Server 2003 with already-established shared folders over the local domain via a Windows DC and AD. A Linux box is being used as a file server, with the folder /files/share R+W by all domain users; this is not a problem. I have already transferred the files from the Windows box to /files/share on the Linux box; however, I now want to create a junction point in order to stop users saving to the Windows box. I have tried the File Server administration tools on Windows Server 2003, however they will not allow me to create a junction to a remote server. I have tried mounting the remote filesystem as a drive and proceeding that way, but no joy. Anyone have any suggestions?
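    Two constraints worth spelling out, then a hedged sketch. NTFS junction points can only target directories on a local volume, so no tool on Server 2003 will let you junction to the Linux share; directory symbolic links that can point at UNC paths (mklink /D) only arrived with Vista/Server 2008. The usual workarounds are a DFS namespace that presents the Samba share under the existing path, or simply repointing users with a mapped drive in a logon script (drive letter, server and share names below are placeholders):

        rem logon script: retire the old mapping and point the drive letter at the Linux export
        net use S: /delete
        net use S: \\linuxbox\share /persistent:yes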

    Read the article
