Search Results

Search found 2838 results on 114 pages for 'considered harmful'.


  • JCP.next.3: time to get to work

    - by Patrick Curran
    As I've previously reported in this blog, we planned three JSRs to improve the JCP's processes and to meet our members' expectations for change. The first - JCP.next.1, or more formally JSR 348: Towards a new version of the Java Community Process - was completed in October 2011. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. However, because we wanted to complete this JSR quickly we deliberately postponed a number of more complex items, including everything that would require modifying the JSPA (the legal agreement that members sign when they join the organization), to a follow-on JSR.

    The second JSR (JSR 355: JCP Executive Committee Merge) is in progress now and will complete later this year. This JSR is even simpler than the first, and is focused solely on merging the two Executive Committees into one for greater efficiency and to encourage synergies between the Java ME and Java SE platforms.

    Continuing the momentum to move Java and the JCP forward, we have just filed the third JSR (JCP.next.3) as JSR 358: A major revision of the Java Community Process. This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons we expect to spend a considerable amount of time working on it - at least a year, and probably more.

    The current version of the JSPA was created back in 2002, although some minor changes were introduced in 2005. Since then the organization and the environment in which we operate have changed significantly, and it is now time to revise our processes to ensure that they meet our current needs. We have a long list of topics to be considered, including the role of independent implementations (those not derived from the Reference Implementation), licensing and open source, ensuring that our new transparency requirements are implemented correctly, compatibility policy and TCKs, the role of individual members, patent policy, and IP flow.

    The Expert Group for JSR 358, as with all process-change JSRs, consists of all members of the Executive Committees. Even though the JSR has just been filed, we started discussions on the various topics several months ago (see the EC's meeting minutes for details) and our EC members - including the new members who joined within the last year or two - are actively engaged.

    Now it's your opportunity to get involved. As required by version 2.8 of our Process (introduced with JSR 348), we will conduct all our business in the open. We have a public java.net project where you can follow and participate in our work. All of our deliberations will be copied to a public Observer mailing list, we'll track our issues on a public Issue Tracker, and all our documents (meeting agendas and minutes, task lists, working drafts) will be published in our Document Archive. We're just getting started, but we do want your input. Please visit us on java.net where you can learn how to participate. Let's get to work...

    Read the article

  • OpenGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars and 31-px-wide spaces between them. However, it appears blank.

    2) Next, I write with 0x000000FF. By my apparently flawed understanding of bitmap mode, I would expect this to produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar.

    3) 0x55555555 is 1010101010101010101010101010101 in binary, so writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.

    4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as one element, despite what the spec seems to suggest. The fact that I can generate a gray color (when I expected to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, supports this conclusion. Questions:

    1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests?

    2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am however forcing a 3.0 context.)

    3) Am I better off unpacking the bitmap into a pixmap using a shader? That is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
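    In case it helps the next reader: before concluding that the hardware ignores GL_BITMAP, it may be worth ruling out the legacy pixel-transfer state, since unset index-to-color maps or an unexpected bit order can also produce blank or gray output. A hedged sketch, untested and with illustrative values:

        /* Hedged sketch: pixel-store and pixel-map state that affects
           GL_BITMAP uploads. Values are illustrative, not a verified fix. */
        GLfloat tone[2] = { 0.0f, 1.0f };               /* index 0 -> black, 1 -> white */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);          /* rows begin on byte boundaries */
        glPixelStorei(GL_UNPACK_LSB_FIRST, GL_FALSE);   /* MSB of each byte = leftmost pixel */
        glPixelMapfv(GL_PIXEL_MAP_I_TO_R, 2, tone);     /* color-index lookup tables */
        glPixelMapfv(GL_PIXEL_MAP_I_TO_G, 2, tone);
        glPixelMapfv(GL_PIXEL_MAP_I_TO_B, 2, tone);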

    Read the article

  • Laptop freezes and seems to crash, but continues working after waiting for a few minutes [closed]

    - by Corwin
    I've had this old notebook lying around, and because I was missing a second machine (my wife usually steals the first ;) ) I considered installing Linux. As a PHP developer I work with Linux servers (usually Fedora) on a daily basis, and because it's an older machine that I want to use for development, Linux seemed the best option. Speed-wise I expected a good experience, better than Windows 7 on the same machine. The results were terrible.

    I tried Ubuntu 12.04. The shell never got past showing the background. The system doesn't freeze entirely, since the mouse still works and I can use Ctrl+Alt+F2 etc. to enter terminal mode. I suspected hardware problems and even exchanged the hard disk and RAM. No luck though, so I started over and tried 11.10. Same results, so I tried 10.04.4, which did install properly. Not sure if Unity was the problem, but it seems likely.

    But then I tried simple things like surfing the net; the system froze and I thought it had crashed, so after a few minutes I pulled the plug and rebooted. When it happened again I waited, and after a few minutes the system came back to life like nothing had happened. Long story short: besides the fact that the entire interface is very sluggish, any and all graphical functions freeze the system, and the more elaborate the animation would be, the longer it freezes. I switched Chromium from windowed to fullscreen mode and had to wait 15 minutes to continue. I don't see the animation that is probably supposed to happen in between; it just freezes, and after unfreezing it's fullscreen.

    I don't think it's a bug. I suspect the problem is with my graphics card. Like I said, it's an old system; so old that I can't even find the original ATI drivers anywhere. (I'll post the details of my system at the end of this post.) I'm at a loss as to what to do next. I tried other distros; so far only Dreamlinux works normally, and Linux Mint won't start as a live CD. I think I simply need a driver update, but I can't find one anywhere. Does anyone have the same experience? Maybe even someone who has or had the same notebook running Ubuntu at some point? Anyway, here are the specs: http://www.nec-driver.com/nec-driver/NEC-Versa-P550---FP550-Driver_421.html

    Read the article

  • Is the Observer pattern adequate for this kind of scenario?

    - by Omega
    I'm creating a simple game development framework with Ruby. There is a node system: a node is a game entity, it has a position, and it can have children nodes (and one parent node). Children are always drawn relative to their parent.

    Nodes have a @position field, and anyone can modify it. When the position is modified, the node must update its children accordingly to properly draw them relative to itself. @position contains a Point instance (a class with x and y properties, plus some other useful methods). I need to know when a node's @position's state changes, so I can tell the node to update its children. This is easy if the programmer does something like this:

        @node.position = Point.new(300, 300)

    because it is equivalent to calling this:

        # Code in the Node class
        def position=(newValue)
          @position = newValue
          update_my_children # <--- I know that the position changed
        end

    But I'm lost when this happens:

        @node.position.x = 300

    The only one that knows that the position changed is the Point instance stored in the @position property of the node. But I need the node to be notified! It was at this point that I considered the Observer pattern. Basically, Point is now observable. When a node's position property is given a new Point instance (through the assignment operator), it will stop observing the previous Point it had (if any), and start observing the new one. When a Point instance gets a state change, all observers (the node owning it) will be notified, so now my node can update its children when the position changes.

    A problem is when this happens:

        @someNode.position = @anotherNode.position

    This means that two nodes are observing the same point, so if I change one node's position, the other's would change as well. To fix this, when a position is assigned, I plan to create a new Point instance, copy the passed argument's x and y, and store my newly created point instead of storing the passed one. Another problem I fear is this:

        somePoint = @node.position
        somePoint.x = 500

    This would, technically, modify @node's position. I'm not sure anyone would expect that behavior; I'm under the impression that people see Point as some kind of primitive rather than an actual object.

    Is this approach even reasonable? Reasons I'm feeling skeptical: I've heard that the Observer pattern should be used with, well, many observers, and technically in this scenario there is only one observer at a time. Also, when assigning a node's position from another's (@someNode.position = @anotherNode.position), creating a whole new instance rather than storing the passed point feels hackish, or even inefficient.
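    For what it's worth, here is a minimal sketch of the observable-Point idea described above, assuming a single listener callback is enough (all names are illustrative, not from an actual framework):

        # Minimal sketch: Point notifies one listener when x or y changes.
        class Point
          attr_reader :x, :y

          def initialize(x, y)
            @x, @y = x, y
          end

          # The owning node registers a callback here.
          def on_change(&block)
            @listener = block
          end

          def x=(value)
            @x = value
            @listener.call if @listener
          end

          def y=(value)
            @y = value
            @listener.call if @listener
          end
        end

        class Node
          def position=(newValue)
            # Defensive copy, so two nodes never share one Point.
            @position = Point.new(newValue.x, newValue.y)
            @position.on_change { update_my_children }
            update_my_children # the position as a whole just changed
          end

          def update_my_children
            # redraw children relative to @position...
          end
        end

    This keeps @node.position.x = 300 working as hoped, at the cost of the copy-on-assignment already proposed above.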

    Read the article

  • Postgres - could not create any TCP/IP sockets

    - by Jacka
    I'm running a Rails app in development with PostgreSQL 9.3. When I tried to start the Passenger server today, I got:

        PG::ConnectionBad - could not connect to server: Connection refused
        Is the server running on host "localhost" (217.74.65.145) and accepting
        TCP/IP connections on port 5432?

    No big deal, I thought; that happened before, and restarting Postgres always solved the problem. So I ran sudo service postgresql restart and got:

        * Restarting PostgreSQL 9.3 database server
        * The PostgreSQL server failed to start. Please check the log output:
        2014-06-11 10:32:41 CEST LOG: could not bind IPv4 socket: Cannot assign requested address
        2014-06-11 10:32:41 CEST HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
        2014-06-11 10:32:41 CEST WARNING: could not create listen socket for "localhost"
        2014-06-11 10:32:41 CEST FATAL: could not create any TCP/IP sockets
        ...fail!

    My postgresql.conf points to the defaults: localhost and port 5432. I tried changing the port, but the error message is the same (except for the port change). Both ps aux | grep postgresql and ps aux | grep postmaster return nothing.

    EDIT: In postgresql.conf I changed listen_addresses to 127.0.0.1 instead of localhost and it did the trick; the server restarted. I also had to edit my applications' db config to point to 127.0.0.1 instead of localhost. The question now, however, is why localhost is considered to be 217.74.65.145 and not 127.0.0.1. This is my /etc/hosts:

        127.0.0.1 local
        127.0.1.1 jacek-X501A1
        127.0.0.1 something.name.non.example.com
        127.0.0.1 company.something.name.non.example.com
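    A detail worth noticing in the /etc/hosts above, for anyone hitting the same thing: 127.0.0.1 is mapped to "local", not "localhost", so lookups of "localhost" fall through to DNS, which evidently returns 217.74.65.145. A hedged fix, assuming nothing depends on the bare "local" alias:

        127.0.0.1 localhost local
        127.0.1.1 jacek-X501A1
        127.0.0.1 something.name.non.example.com
        127.0.0.1 company.something.name.non.example.com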

    Read the article

  • Filesystems for webserver with SATA and solid state disk

    - by Jorisslob
    We have just ordered a new webserver with a 120 GB solid state disk and a SATA disk, and I am trying to plan ahead for what sort of filesystem to use. The system will be running Linux, with Apache/Tomcat to host Java services. The main service is a system where people can upload reasonably large files (on the order of 100 MB: images, image stacks and video), which people will be able to annotate and which will be sent to a database server when annotation is complete.

    Thus far, I plan to put most of the utility programs of the operating system on the SSD and put the large media files there too. The SATA disk will hold the less volatile data like Apache, Tomcat and the servlets. For filesystems I have considered going with the stable ext3, because I hear that it is best supported; the downside seems to be that it is not the ideal choice for large files. That is why I am leaning towards using XFS for the SSD and ext3 for the SATA. My questions are:

    1) Does this sound like a reasonable setup?

    2) What filesystems would you recommend for the SSD and for the SATA?

    Thanks
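    If it helps to make the question concrete, here is a hedged sketch of the layout being considered (device names and mount points are illustrative, not from the actual box):

        # Hedged sketch; device names and mount points are illustrative.
        mkfs.xfs  /dev/sda1     # SSD: XFS for the large media uploads
        mkfs.ext3 /dev/sdb1     # SATA: ext3 for Apache/Tomcat and the servlets
        # /etc/fstab entries; noatime avoids needless writes on the SSD:
        #   /dev/sda1  /srv/media  xfs   noatime   0 2
        #   /dev/sdb1  /opt        ext3  defaults  0 2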

    Read the article

  • Vagrant synced folders aren't case sensitive

    - by lvmisooners
    For our web stack, we are moving from a Windows server to CentOS. To facilitate development, we're utilizing Vagrant to run CentOS VMs locally, and using Vagrant's synced folders feature to let devs use their favorite IDEs on their host machines. But we're finding that one key feature is missing from this setup: filesystem case sensitivity. The synced folder inside the VM apparently takes on the properties of the host's filesystem, so if I'm developing from a Windows machine, or even OS X, the filesystem isn't case sensitive. This is a big issue, as our production servers will be pure CentOS and their filesystem will be case sensitive. Case sensitivity is one of the main reasons we wanted a local VM; we want to prevent "It works on my machine!"

    Some workarounds we've considered or tried:

    - Use lsyncd to sync from the Vagrant share to a location within the VM that is case sensitive (updating files on the host doesn't seem to generate the events in the VM that lsyncd listens for)
    - Make a case-sensitive partition on the host (doesn't work for Windows)
    - Use Samba (this may be an option, but we haven't vetted it yet)

    Is there a better way? Note that we have developers using Windows, OS X, and Ubuntu, and the solution needs to work everywhere.
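    One further candidate, offered as a hedged sketch: newer Vagrant releases support one-way rsync synced folders, which keep the working copy on the guest's own (case-sensitive) filesystem. Assuming a Vagrant version with rsync support, and with the box name and paths purely illustrative:

        # Vagrantfile sketch; box name and paths are illustrative.
        Vagrant.configure("2") do |config|
          config.vm.box = "centos65"
          config.vm.synced_folder ".", "/vagrant",
            type: "rsync",
            rsync__exclude: ".git/"
        end

    The trade-off is that the sync is one-way and driven by vagrant rsync-auto, so it behaves more like a continuous deploy than a live share.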

    Read the article

  • How to easily input and display isolated Japanese radicals on Mac OS X?

    - by ogerard
    (This question was considered off-topic on Japanese.SE and more suitable for Super User.)

    I like to write computer notes about what I learn in Japanese. From time to time, I would like to be able to include in my text a given radical, say kokoro 心, which takes several graphic forms when used as an element in a more complex kanji, for instance 忄 or 㣺. I did not succeed on my system (Mac OS X 10.7) in finding the glyphs for these variants conveniently and exactly as I would like them (I would also be interested in how to do this on Windows 7 or Linux).

    I first tried the name of the kanji from which the radical is derived. Then I tried to use the Japanese names for the variants, such as risshinben (the left-side form 忄) and shitagokoro (the bottom form 㣺), hoping that the hiragana or katakana input would recognize them and propose their representation, but it did not work. So I looked into the Full Japanese Character Table, under the "by radical" tab, and found at least a version of each of them: 忄 (CJK 5FC4) and 㣺 (CJK 38FA), with the correct kun readings. I have them now as favorites, but do I need to do that for all radicals? Do I need to register all of them in a user dictionary? I would imagine that I am not the only one who wants to use them.

    Besides, the versions I have found are not suited for all occasions: they are centered on a standard kanji square. If I want them to appear next to a placeholder, or to demonstrate their proportion relative to the rest of a typical kanji, I have to make complicated adjustments, depending on my use and the kind of radical.

    More generally, are there computer tools for Japanese dictionary editors and Japanese teachers that I could use on Mac OS X?

    (I could not add relevant tags such as "japanese" or "ideogram"; please feel free to edit.)

    Read the article

  • Enabling syntax highlighting for LESS in Programmer's Notepad?

    - by Cody Gray
    When I don't feel like firing up the Visual Studio behemoth, or when I don't have it installed, I always turn to Programmer's Notepad. It's an amazingly light and fast little text editor, with the special advantage that it is completely platform-native and conforms to standard UI conventions. Therefore, please do not suggest that I consider using other text editors; I've already considered and rejected them because they do not use native UI controls. I like Programmer's Notepad, thank you very much.

    Unfortunately, I've recently begun to learn, use, and love LESS for all of my CSS coding needs, and it appears that Programmer's Notepad is not bundled with a syntax highlighting scheme for LESS. Does anyone know if there is, by chance and good fortune, one already available somewhere on the web that some kind soul has tediously prepared? If not, how can I go about writing one of my own? Is there a way to build on the existing CSS scheme?

    It's also possible that any code coloring scheme designed for Scintilla-based editors will work, as Programmer's Notepad is based on the Scintilla control. If you know of a LESS highlighting scheme for Scintilla-based editors, and how to use it with Programmer's Notepad, please suggest that as well.

    Read the article

  • GNU screen: how to get the current session name programmatically

    - by Jimm Chen
    [This can be considered step 2 of my previous question, "Is it possible to change GNU screen session name after created?"]

    Actually, I'd like to write a script that can display the current screen session name and change it. For example:

        sren armcross

    would change the session name to armcross (ARM gcc cross compiler) and output something like:

        screen session name changed from '25278.pts-15.linux-ic37' to 'armcross'

    So the key question is how to get the current session name: not only to display the old name, but also because, according to my previous question, I have to know it (to pass to -d -r) before I can change it.

    Can we use $STY for the current session name? No. $STY does not change after you have changed the session name to a user-defined one. However, in the command

        screen -d -r <oldsessname> -X sessionname armcross

    <oldsessname> should be the user-defined name (if one was ever defined) instead of $STY; otherwise screen spouts the error "No screen session found."

    Maybe there is a verbose way: use screen -list to list all sessions (user-defined names included), then match the pid part of $STY against the listed sessions to find the current session's user-defined name. It shouldn't be so verbose for such a straightforward question, don't you think? The -d -D and -r -R options seem to expose too much implementation detail to screen's users. It seems that, to rename a session, you have to detach it, then do the rename, then reattach it. Right?

    My env: openSUSE 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06. Thank you.
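    For the record, the verbose way described above is scriptable; a hedged sketch (untested, with the target name purely illustrative):

        # Hedged sketch of the verbose approach: the pid part of $STY
        # survives renames, so use it to look up the current name.
        pid=${STY%%.*}
        cur=$(screen -ls | awk -v p="$pid" '$1 ~ ("^" p "\\.") { print $1; exit }')
        echo "current session: $cur"
        screen -d -r "$cur" -X sessionname armcross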

    Read the article

  • Install Linux Mint on PC without bootable CD

    - by crosenblum
    Unfortunately my PC's CD drive is not bootable; I have such a mixture of SATA and IDE drives that, until I have more money to redo my controller setup, I can't boot from any CD. Currently, I have a DVD burned with the latest version of Linux Mint, and a USB drive with an old version of Mint. I have a partition ready to install Linux Mint into, but no idea how to install it, since I can only boot to my hard drive. I am totally unable to boot from CD, so that is definitely out. My main partition is WinXP Pro SP3.

    Is there software I can use to format my Linux partition, so that I can then just copy Mint over to that partition? Or is there a better way to install Linux Mint? I have to do it from within Windows XP, since that's all I can boot right now. I have considered Mint4Win, but that doesn't allow a full installation of Linux Mint. Any ideas?

    Read the article

  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a 3rd-party provider API for a great deal of our subsystems (I can't get into who they are or what they do). The 3rd party, however, is quite stringent about network access and has no notion of a development sandbox: access is restricted to 2 or 3 IP numbers and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team, which is still problematic as people's home IPs change, people travel, we have more than 2 devs, etc. Wide IP blocks are not permitted by the 3rd party, nor will they allow dynamic-DNS-type services, and there is no simple console to swap IPs on the fly (e.g. if a dev's IP at home changes or they are on the road).

    As none of us are deep network experts, I'm wondering what our viable options are. Are there such things as 3rd-party hosts for VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion here would be a 3rd-party VPN that we'd all connect to, and we'd register its IP as an origin with our 3rd party. We've also considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect. Amazon only gives you so many static IPs, however (I believe 5?), so this would only be a stopgap solution until our team size outstrips our IP count at Amazon. Those were the only viable thoughts I had, but again, I'm far from a networking guy. I tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.

    Read the article

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputations, I won't mention real names. I'll just use:

    - Business I worked for previously: ABC Web Dev
    - Hosting company they used: XYZ Hosting

    I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their clients' data, including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites, after pulling them from their local development computers and putting them up on another hosting provider. They ended up losing a lot of clients because of it, and their reputation was ruined.

    I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace, but although they are a great company, according to Wikipedia they have still had downtime in their past. I thought it might be a good idea to run two providers at once, to ensure that if anything happened to one, the websites would still be live because of the other. I know the websites would have to be pulling from one server at all times, but if there's a way to redirect requests to the second server when the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally which will allow for quick recovery if a provider has any issues; however, I'd like to avoid any downtime at all if possible.

    So my questions are:

    - Has anyone tried running two providers simultaneously?
    - Would this be considered good practice, or am I going too far?
    - Is there really any way to run two simultaneously, where one server acts as a backup?

    Read the article

  • Any ideas out there as to how the data can be recovered from an SSD?

    - by ben
    A friend had some form of catastrophic failure on an HP Mini 1000: unbootable, and of course there was data that wasn't backed up. I've removed the SSD and hooked it up to a ZIF 40 enclosure, but cannot seem to get the drive to be recognized in Windows 7. In Disk Management it displays as present, but uninitialized; attempting to initialize it presents an error from Virtual Disk Manager: "The device is not ready". There is scant information on MIE (the custom OS), so I'm not even sure what kind of filesystem I'm dealing with. In any case, if the filesystem is indeed some flavor other than FAT or NTFS, is this error consistent with that? Are there any creative ideas out there as to how the data can be recovered?

    Update: Thanks for all the suggestions! I hadn't even considered running a live CD. Unfortunately, no luck with Ubuntu (live CD) or explore2fs. The ZIF connection seems OK (a color-coded green LED for a proper connection, orange for not). The drive can't be initialized and therefore can't be formatted, so I guess there may be some real damage; it probably needs to head to a specialist. Thanks again for the feedback, much appreciated.

    Read the article

  • Automatically check for Security Updates on CentOS or Scientific Linux?

    - by Stefan Lasiewski
    We have machines running RedHat-based distros such as CentOS or Scientific Linux. We want the systems to automatically notify us if there are any known vulnerabilities in the installed packages. FreeBSD does this with the ports-mgmt/portaudit port.

    RedHat provides yum-plugin-security, which can check for vulnerabilities by their Bugzilla ID, CVE ID or advisory ID. In addition, Fedora recently started to support yum-plugin-security; I believe this was added in Fedora 16. Scientific Linux 6 did not support yum-plugin-security as of late 2011. It does ship with /etc/cron.daily/yum-autoupdate, which updates RPMs daily, but I don't think this handles security updates only. CentOS does not support yum-plugin-security at all. I monitor the CentOS and Scientific Linux mailing lists for updates, but this is tedious and I want something which can be automated.

    For those of us who maintain CentOS and SL systems, are there any tools which can:

    1) Automatically (programmatically, via cron) inform us if there are known vulnerabilities in my current RPMs.

    2) Optionally, automatically install the minimum upgrade required to address a security vulnerability, which would probably be yum update-minimal --security on the command line.

    I have considered using yum-plugin-changelog to print out the changelog for each package, and then parsing the output for certain strings. Are there any tools which do this already?
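    For distros where the plugin does work, the cron half of this is small. A hedged sketch, assuming yum-plugin-security is installed (which, as noted above, is not yet the case on CentOS/SL) and with the recipient address illustrative:

        #!/bin/sh
        # /etc/cron.daily/check-security-updates -- hedged sketch; requires
        # yum-plugin-security. Mails only when there is something to report.
        out=$(yum --security check-update -q 2>&1)
        [ -n "$out" ] && echo "$out" | mail -s "security updates on $(hostname)" admin@example.com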

    Read the article

  • Windows 7, network connection with no default gateway: any way to change the "Unknown network" status?

    - by e-t172
    Hi, I have a computer running Windows 7 Pro RTM. This computer has two network connections:

    - A Wi-Fi connection to the Internet (through a home router), which works just fine.
    - An OpenVPN virtual network connection; more precisely, a virtual Ethernet connection which behaves exactly like a physical wired Ethernet connection.

    My problem is that the "Network and Sharing Center" shows "Unknown network" for the OpenVPN connection. After some research I found that logical networks (outside a domain) are identified by the MAC address of the default gateway of the connection. Problem is, the OpenVPN connection has no default gateway: it is a private network, so I don't need one. Consequently, the "Unknown network" is always considered public, so the firewall is always in "public mode", which I don't want. Plus, I can't rename "Unknown network" or anything (which makes sense), so it is kinda ugly.

    My goal is to define a proper logical network for the OpenVPN connection with the private profile. I know of some workarounds (disable the firewall, modify the security policy to make all unknown networks "private"), but they're still workarounds. I just want my clients to connect to the VPN without having to disable their firewall settings, without changing global configuration with potential side effects (the security-policy solution), and without having to look at an ugly "Unknown network" in the Network and Sharing Center.

    Is there any way I can do this? I tried to check what was going on in the registry (HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList is interesting), but I still didn't find a way to "force" the OpenVPN connection to be assigned to a logical network. Any help would be very appreciated. A related question showed up at Superuser: http://superuser.com/questions/37355/windows-7-cant-identify-network/37422

    Read the article

  • Client flips between internal and external IP addresses?

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes into a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works.

    Except: one of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This can be a problem in terms of how the clients interact with the way I have some of the clients' SSH systems configured (e.g., allowing access from the internal network but not the external network), but it also Just Seems Wrong.

    I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X 10.6; its address is statically assigned, is not one of the five externally accessible addresses, and it gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it. Does anybody out there have any advice? This is driving me crazy...

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year, we're looking into our options:

    1) Replace the 300 PCs with full Windows 7 PCs (£100k+?), with no use of terminal server (our current model).

    2) Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server: cheaper clients, but Terminal Server CALs required?

    3) Keep the 300 PCs, and replace Windows XP with a Linux thin client capable of connecting to our terminal server: no hardware costs, just Terminal Server CALs required?

    4) Keep the 300 PCs, remove the hard drives, and make use of a PXE-bootable "thin client" to connect to our terminal server.

    If we were to choose option 4, what are the options out there? Are there any official PXE-bootable thin clients for terminal server? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation, so I'm curious what the current trend is for this problem.

    Edit: Option 5 - create a bootable Windows PE image with RDP auto-start and use that as a "thin client" for our terminal server. Is Windows PE licence-free in such a model?

    Read the article

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the GRUB 2 bootloader.

    The motivation: I have 4 x 1 TB 7200 rpm disks. Two are newer and faster (up to 200 MB/sec) and the other two are slower (up to 140 MB/sec). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a total speed of up to ~480 MB/sec. That is roughly 4 x 120 MB/sec, so the RAID-0 runs at the speed of the slowest device. My idea is to create a separate RAID-0 device, md0, from 500 GB partitions of the slower hard disks. Theoretically, this md0 device will have a speed of 2 x 140 = ~280 MB/sec. After that, I'm going to add this md0 device to a RAID-0 with the faster disks (sketched below), finishing with up to 3 x 200 = 600 MB/sec. The stripe width for this outer RAID will be twice that of the underlying RAID of slow disks.

    The questions are:

    - Is it possible, or am I missing something?
    - Will it work as expected?
    - Can I boot from such a consolidated RAID device?
    - Any better ideas? Any pitfalls?

    I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, ability to customize parameters, and so on).

    PS: The speed is needed for a home virtualization server, and just for experience/fun. Reliability is provided via regular automatic backups to a separate device.

    PPS: I also considered using different stripe widths for hard disks of different speeds in a single RAID, but mdraid does not seem to support that.
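    A hedged mdadm sketch of the nesting described above (device names and chunk sizes are illustrative, and it is untested):

        # Inner stripe across the two slow-disk partitions:
        mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sdc1 /dev/sdd1
        # Outer stripe across the two fast disks plus md0:
        mdadm --create /dev/md1 --level=0 --raid-devices=3 --chunk=512 /dev/sda1 /dev/sdb1 /dev/md0
        # Record both arrays so they assemble at boot:
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf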

    Read the article

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based upon your experience, and also which ones you would not recommend. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients.

    The issue we are facing currently is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution; thus, very large and complicated solutions like System Center are most likely out.

    I have recently been looking at Dell's KACE K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple, and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for situations like mine. However, I'm not sure if this is the best solution. I've also looked at Shavlik's NetChk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product. However, it looks like they might have a very good patch database.

    My question is this: what are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with Mac and Windows.

    Read the article

  • Server Names Inside Private Network

    - by thyandrecardoso
    Our office has a private network, where any requests to a (pre-determined) public IP are forwarded to a private IP inside said network. At that private IP, we've got a server running several services, including HTTP servers and SCM systems. We only control our private network, having no control over the public IP configuration. We bought a domain name and pointed it at that public IP, so people can access our services from the outside.

    But when inside the office, people can't use that DNS name, because the server and all other hosts inside the network share the same public IP. For desktops inside the office network, dealing with names is really easy: one entry in the hosts file and we're done. However, for laptops that keep going in and out and need to access services inside the office, the naming is really annoying.

    I don't know the "standard" process for dealing with this kind of situation. I've considered installing BIND in the office and making people configure their wireless and wired connections to use that DNS server. What is the correct approach in this situation? If using BIND (or any other DNS server) is the answer, how should I configure it so that people inside the office can use it to resolve our custom names, and get forwarded to the ISP's DNS when trying to reach the internet?
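    If BIND does turn out to be the route, split-horizon views are the usual shape of the configuration; a hedged named.conf sketch (zone name, file paths and addresses are all illustrative):

        // Hedged sketch; zone name, paths and addresses are illustrative.
        view "internal" {
            match-clients { 192.168.0.0/24; 127.0.0.1; };
            recursion yes;                 // resolve everything else normally
            forwarders { 203.0.113.53; };  // ISP resolver, illustrative
            zone "example.com" {
                type master;
                file "/etc/bind/db.example.com.internal";  // A records point at the private IP
            };
        };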

    Read the article

  • How can I bridge a VM to a remote network?

    - by asciiphil
    I have a system running QEMU/KVM (via libvirt). One of its VMs needs to have a presence on a subnet that is not local to the VM host. I have a Linux system on the remote subnet. Is there a way to set up some sort of tunneled bridge to make the VM appear present to the remote system? This will be a temporary situation (hopefully just until the VM owner can configure their system), and network performance and long-term maintainability aren't really issues.

    To give some more concrete information:

    - My VM host has IP address 192.168.54.155/24.
    - The VM has IP address 192.168.65.71/24.
    - I have a remote system at 192.168.65.254/24.
    - Both the VM host and remote system are running Scientific Linux 6.5.
    - I do not control the network or routing between the VM host and the remote system.
    - I do not have access to the guest OS on the VM.

    I would like traffic to the VM's IP address to end up at the VM even though its host isn't directly connected to the appropriate network. I've tried using iproute2's tunnelling, but Linux won't let me add a tunnel to a bridge. I've considered using some sort of iptables mangling to route traffic over the tunnel and make the VM think it's on the right network, but I'm not sure whether there are better approaches. What's the best way to accomplish this hack?
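    One hedged possibility: plain gre/ipip tunnels are layer 3, which is presumably why the bridge refuses them, but a gretap tunnel carries Ethernet frames and can be enslaved to a bridge. A sketch, with interface names illustrative and untested:

        # On the VM host (192.168.54.155); br0 is the bridge holding the VM's tap:
        ip link add gt0 type gretap local 192.168.54.155 remote 192.168.65.254
        ip link set gt0 up
        brctl addif br0 gt0
        # On the remote system (192.168.65.254):
        ip link add gt0 type gretap local 192.168.65.254 remote 192.168.54.155
        ip link set gt0 up
        brctl addbr br0
        brctl addif br0 eth0   # caution: eth0's IP config must move onto br0
        brctl addif br0 gt0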

    Read the article

  • Fax server farm architecture

    - by Brian Postow
    I'm not sure this is the right forum for this, but it's not Stackoverflow material, so... I'm trying to figure out an architecture to solve the following problem; maybe someone here can help.

    I have a T1 with 23 fax lines coming into the building, and a computer (Macintosh Xserve) running HylaFAX. If I had one POTS line, I'd be done. However, I have no idea how to get the T1 into the Mac. Options I've considered:

    - Some sort of PCI T1 modem (does that exist?)
    - Splitting the T1 into 23 POTS lines and then connecting 23 analog modems to the Mac, either via an external modem bank (do they still make those?) or via some sort of external PCI bank, which would allow me to use more than two 4-port modem cards.
    - Either the T1 or the split POTS lines going into some intermediate device and then transferring the images over IP, or USB, to the Mac.
    - Really, any other option I can come up with.

    This has GOT to be a problem that someone has already solved, right?

    Read the article

  • Idle state detection for server

    - by odinmillion
    Windows has a service that detects the idle state. Details:

        Task Idle Conditions: The computer is considered idle if all the processors and all the disks were idle for more than 90% of the past 15 minutes, and if there is no keyboard or mouse input during this period of time. When the Task Scheduler service detects that the computer is idle, the service only waits for user input to mark the end of the idle state.

    This is very useful for ordinary PCs that have a keyboard and mouse: we can use the standard Task Scheduler to start some process like defrag when the PC is in the idle state and stop it when the PC leaves the idle state. But what should we use on a standalone server without a keyboard and mouse? The server sometimes receives commands over TCP/IP that start CPU and HDD activity, but sometimes CPU and HDD activity is at zero. I would like to use those periods of time to run defrag or another process, but such "idle" processes should be terminated when new commands arrive.

    So the standard idle-state conditions can't help me, because there is no user input to end the idle state. I need a more customizable idle-state detector: automatically started processes shouldn't count toward the idle state, but the PC should leave the idle state when another process appears. What should I use? Maybe an advanced task scheduler already exists? Or should I write a small utility in C#? I hope this is a standard task and all the useful utilities are already compiled :)

    Read the article

  • Is Windows Server 2008R2 NAP solution for NAC (endpoint security) valuable enough to be worth the hassles?

    - by Warren P
    I'm learning about Windows Server 2008 R2's NAP features. I understand what network access control (NAC) is and what role NAP plays in it, but I would like to know what limitations and problems it has that people wish they knew before they rolled it out.

    Secondly, I'd like to know if anyone has had success rolling it out in a mid-size environment (a multi-city corporate network with around 15 servers and 200 desktops) with most (99%) of the clients on Windows XP SP3 and newer (Vista and Win7). Did it work with your anti-virus? (I'm guessing NAP works well with the big-name anti-virus products, but we're using Trend Micro.) Let's assume that the servers are all Windows Server 2008 R2. Our VPNs are Cisco gear and have their own NAC features.

    Has NAP actually benefited your organization, and was it wise to roll it out? Or is it yet another in the long list of things that Windows Server 2008 R2 does, but that you're probably not going to want to use even if you do move your servers up to it? In what particular ways might the built-in NAP solution be the best one, and in what particular ways might no solution at all (the status quo pre-NAP), or a third-party endpoint security or NAC solution, be considered a better fit?

    I found an article where a panel of security experts in 2007 said NAC is maybe "not worth it". Are things better now in 2010 with Windows Server 2008 R2?

    Read the article
