Search Results

Search found 836 results on 34 pages for 'flexibility'.

  • Taking a Project's Development to the Next Level

    - by user1745022
    I have been looking for advice for a while on how to handle a project I am working on, but to no avail. I am pretty much on my fourth iteration of improving an "application" I am working on; the first two times were in Excel, the third time in Access, and now in Visual Studio. The field is manufacturing.

    The basic idea is that I take read-only data from a massive Sybase server, filter it, and create much smaller tables in Access daily (using delete and append queries), and then do a bunch of stuff. More specifically, I use a series of queries to either combine data from multiple tables or group data in specific ways (aggregate functions), and then I place this data into a table (so I can sort and manipulate the data using DAO.Recordset and run multiple custom algorithms). This process is repeated multiple times throughout the database until a set of relevant tables is created. Many times I will create a field in a query with a value such as 1.1 so that when I append it to a table I can store information from the algorithms in that field. So, as the process continues, the number of fields in the tables changes. The overall application consists of 4 "back-end" databases linked together on a shared drive, with various output (either front-end Access applications or Excel). So my question is: is this essentially how many data-driven applications that solve problems work? Each back-end database is updated with fresh data daily, and updating each takes around 10 seconds (for three of them) and 2 minutes (for one).

    Project objectives: I am moving to SQL Server soon. The front end will be a web application (I know basic web development and like the administrative flexibility), and Visual Studio will be the IDE, with C#/.NET. Should these algorithms be run "inside the database," or as a series of C# functions on each server request? I know you're not supposed to store data in a database unless it is an actual data point, and in Access I have many columns that just hold calculations from algorithms in VBA. The truth is, I have seen multiple professional Access applications, and have never seen one that has the complexity of, or does even close to, what mine does (for better or worse). But I know some professional software applications are 1000 times better than mine. So please, please, please give me a suggestion of some sort. I have been completely on my own and need some guidance on how to approach this project the right way.
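
    A quick sketch of the distinction the last question is really about: pushing the grouping and aggregation into a single set-based SQL statement and computing derived values on demand, rather than looping a recordset and appending calculated values into extra columns. The project targets C#/.NET and SQL Server; this sketch uses Python with an in-memory SQLite database only so it stands alone, and every table and column name in it is invented for illustration.

        import sqlite3

        # Illustrative only: table and column names are invented for this sketch.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE readings (machine TEXT, shift TEXT, qty REAL, scrap REAL)")
        conn.executemany(
            "INSERT INTO readings VALUES (?, ?, ?, ?)",
            [("M1", "A", 100, 4), ("M1", "B", 120, 9), ("M2", "A", 80, 2)],
        )

        # The aggregation happens inside the database engine, not in a recordset loop,
        # and no derived value is written back into a stored column.
        query = """
            SELECT machine,
                   SUM(qty)   AS total_qty,
                   SUM(scrap) AS total_scrap,
                   ROUND(SUM(scrap) * 100.0 / SUM(qty), 2) AS scrap_pct
            FROM readings
            GROUP BY machine
            ORDER BY scrap_pct DESC
        """
        for machine, total_qty, total_scrap, scrap_pct in conn.execute(query):
            print(machine, total_qty, total_scrap, scrap_pct)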

    Read the article

  • How can we unify business goals and technical goals?

    - by BAM
    Some background I work at a small startup: 4 devs, 1 designer, and 2 non-technical co-founders, one who provides funding, and the other who handles day-to-day management and sales. Our company produces mobile apps for target industries, and we've gotten a lot of lucky breaks lately. The outlook is good, and we're confident we can make this thing work. One reason is our product development team. Everyone on the team is passionate, driven, and has a great sense of what makes an awesome product. As a result, we've built some beautiful applications that we're all proud of. The other reason is the co-founders. Both have a brilliant business sense (one actually founded a multi-million dollar company already), and they have close ties in many of the industries we're trying to penetrate. Consequently, they've brought in some great business and continue to keep jobs in the pipeline. The problem The problem we can't seem to shake is how to bring these two awesome advantages together. On the business side, there is a huge pressure to deliver as fast as possible as much as possible, whereas on the development side there is pressure to take your time, come up with the right solution, and pay attention to all the details. Lately these two sides have been butting heads a lot. Developers are demanding quality while managers are demanding quantity. How can we handle this? Both sides are correct. We can't survive as a company if we build terrible applications, but we also can't survive if we don't sell enough. So how should we go about making compromises? Things we've done with little or no success: Work more (well, it did result in better quality and faster delivery, but the dev team has never been more stressed out before) Charge more (as a startup, we don't yet have the credibility to justify higher prices, so no one is willing to pay) Extend deadlines (if we charge the same, but take longer, we'll end up losing money) Things we've done with some success: Sacrifice pay to cut costs (everyone, from devs to management, is paid less than they could be making elsewhere. In return, however, we all have creative input and more flexibility and freedom, a typical startup trade off) Standardize project management (we recently started adhering to agile/scrum principles so we can base deadlines on actual velocity, not just arbitrary guesses) Hire more people (we used to have 2 developers and no designers, which really limited our bandwidth. However, as a startup we can only afford to hire a few extra people.) Is there anything we're missing or doing wrong? How is this handled at successful companies? Thanks in advance for any feedback :)

    Read the article

  • ADF Mobile Released!!

    - by Denis T
    We are pleased to announce the general availability of the newest version of Oracle's ADF Mobile framework. This new framework provides the much anticipated on-device capabilities that the latest mobile applications require.

    Feature Highlights
    - Java - Oracle brings a Java VM embedded with each application so you can develop all your business logic in the platform-neutral language you know and love! (Yes, even on iOS!)
    - JDBC - Since we give you Java, we also provide JDBC along with a SQLite driver and engine that also supports encryption out of the box.
    - Multi-Platform - Truly develop your application only once and deploy to multiple platforms. iOS and Android platforms are supported for both phone and tablet.
    - Flexible - You can decide how to implement the UI: (a) use an existing server-based UI framework like JSF; (b) use your own favorite HTML5 framework like jQuery; (c) use our declarative HTML5 component set provided with the framework. ADF Mobile XML, or AMX for short, provides all the normal input and layout controls you expect, and we also add charts/maps/gauges to provide a very comprehensive set of UI controls. You can also mix and match any of the three for ultimate flexibility!
    - Device Feature Access - You can get access to device features from either Java or JavaScript to invoke features like camera, GPS, email, SMS, contacts, etc.
    - Secure - ADF Mobile provides integrated security that works with your server back end as well. Whether you're using remote URLs, local HTML or AMX, you can secure any or all of your features with a single consistent login page. Since we also give you SQLite encryption, you can be assured that your data is safe.
    - Rapid - Using the same development techniques that ADF developers are already used to, you can quickly create mobile applications without ever learning another language!

    Architecture
    ADF Mobile is a "hybrid" architecture that employs a natively built "container" on each platform that hosts a number of browser windows used to display the application content. We add the Java VM as a natively built library to the container for business logic.

    How To Get Started
    ADF Mobile is an extension to the recently released JDeveloper version 11.1.2.3.0. Simply get the latest JDeveloper from Oracle Technology Network and use the Check for Updates feature to get the ADF Mobile extension. Note: ADF Mobile does not require developers to learn any other languages or frameworks, but to build/deploy to iOS you must be on an Apple Macintosh™ with Xcode installed, and to build/deploy to Android™ you must have the Android SDK installed.

    Read the article

  • 2d, Top-down map with different levels

    - by Ktash
    So, I'm creating a 2D, top-down, sprite-based (tiled) game, and right now I'm working on maps (well, a map editor at the moment, but it will be creating my maps, so basically the same thing).

    The scenario
    I'm thinking about efficiency and creating a map in pieces. In each piece, I plan on having 'layers'. Basically, I plan on rendering it down to a 'below hero' level and an 'above hero' level, with the hero rendered in between, obviously. There will likely also be an 'on level with hero' layer, but I'm not quite there yet. I'm not even worrying about events or interaction yet; just looking to get a hero on the screen. Now, for movement, I obviously need to know what tiles can be moved and in what direction. My plan at the moment is each tile getting 8 bits (4 'can enter in direction' bits, 4 'can leave in direction'). This will allow me to limit movement and even allow one-way directional movement (a small sketch of this bit check follows at the end of this post).

    The dilemma
    This works great for a lot of scenarios. It will allow me to store a map in essentially 3 layers and a string, and gives me flexibility going forward. However, I can't create maps that themselves have layers. A good example is a bridge where the user can go under or over the bridge without invalid moves being allowed. I can't create a platform and allow movement underneath. These are things I would like to be able to include in my game.

    My idea
    In theory, I could allow multiple hero layers and then allow multiple sets of 'below' and 'above' layers (or sandwich layers). But this complicates my system, and makes movement between maps potentially tricky (if the hero is on the third layer at the edge of a map, but that corresponds to the second layer on the other map, how can I allow or disallow movement?).

    My question
    Is there a better way to manage multiple maps with multiple levels like this, where a user's level may be 'connected' on different levels on different maps? Or even... am I doing this the hard way? Is there a more standard way to handle top-down 2D tiled maps that I am just not aware of?

    Things to note or that might be helpful
    - This will be done in JavaScript (transferred around in JSON).
    - State will need to be transferred quickly, so a map-id and x/y/direction should be enough to get me a boolean 'can move' value.
    - Maps will not be a standard size (though they will be a certain number of tiles).
    - I'm making an editor tool so that I can have others help, so something that I can create in a tool would be helpful.
    - 'Teleportation' locations will likely need to exist to get into building maps and to and from different map sets (which will not necessarily be connected), but these have not been created yet (lumping in with events at the moment).
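
    A minimal sketch of the 8-bit 'can enter / can leave' scheme from the scenario above. The game itself is JavaScript; the bit layout is language-neutral and this is Python purely for illustration. The direction order, how the enter bits are keyed (by the side you enter from), and the helper names are all assumptions made for the example.

        # Directions occupy one bit each: N, E, S, W.
        N, E, S, W = 1, 2, 4, 8
        OPPOSITE = {N: S, E: W, S: N, W: E}

        def can_move(src_flags, dst_flags, direction):
            """src_flags/dst_flags are 8-bit tile values: low nibble = 'can leave
            toward direction', high nibble = 'can be entered from that side'."""
            can_leave = src_flags & direction                   # low 4 bits
            can_enter = (dst_flags >> 4) & OPPOSITE[direction]  # high 4 bits
            return bool(can_leave) and bool(can_enter)

        # Example: a tile you may leave to the east, next to a tile that may only
        # be entered from its west side (a one-way passage eastward).
        src = E                       # leave-east only
        dst = (W << 4)                # enter-from-west only
        print(can_move(src, dst, E))  # True
        print(can_move(dst, src, W))  # False - the return trip is blocked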

    Read the article

  • Increase Availability for Data Center Virtual Environments

    - by Antoinette O'Sullivan
    With Oracle VM, you can increase availability and add flexibility for data center virtual environments. To get started, take training on Oracle VM Server for x86 and Oracle VM Server for SPARC, as appropriate for your systems. You can take these live instructor-led courses from your own desk as a live virtual event or travel to an education center for an in-class event.

    The Oracle VM Administration: Oracle VM Server for x86 course, in 3 days, teaches you about creating NFS and iSCSI repositories, migration, cloning and exercising high availability. In-class events already on the schedule include (location, date, delivery language):
    - Zagreb, Croatia: 11 November 2013 (Croatian)
    - Prague, Czech Republic: 21 October 2013 (Czech)
    - Ballerup, Denmark: 26 August 2013 (English)
    - Bordeaux, France: 18 September 2013 (French)
    - Paris, France: 9 October 2013 (French)
    - Strasbourg, France: 11 September 2013 (French)
    - Hamburg, Germany: 30 September 2013 (German)
    - Munich, Germany: 28 October 2013 (German)
    - Budapest, Hungary: 9 September 2013 (Hungarian)
    - Riga, Latvia: 30 September 2013 (Latvian)
    - Oslo, Norway: 16 September 2013 (English)
    - Warsaw, Poland: 28 October 2013 (Polish)
    - Bucharest, Romania: 14 October 2013 (English)
    - Istanbul, Turkey: 23 December 2013 (Turkish)
    - Jakarta, Indonesia: 19 August 2013 (English)
    - Canberra, Australia: 4 November 2013 (English)
    - Melbourne, Australia: 6 November 2013 (English)
    - Sydney, Australia: 25 November 2013 (English)
    - San Francisco, CA, United States: 16 September 2013 (English)
    - Roseville, MN, United States: 21 October 2013 (English)
    - St Louis, MO, United States: 11 November 2013 (English)
    - Reston, VA, United States: 31 July 2013 (English)
    - Buenos Aires, Argentina: 21 August 2013 (Spanish)

    The Oracle VM Server for SPARC: Installation and Configuration course, in 2 days, teaches you about configuring control and service domains, creating guest domains, using virtual disks and networks, and migration. In-class events already on the schedule include (location, date, delivery language):
    - Budapest, Hungary: 12 September 2013 (Hungarian)
    - Prague, Czech Republic: 9 September 2013 (Czech)
    - Colombes, France: 7 October 2013 (French)
    - Stuttgart, Germany: 28 October 2013 (German)
    - Madrid, Spain: 5 September 2013 (Spanish)
    - Istanbul, Turkey: 30 September 2013 (Turkish)
    - Petaling Jaya, Malaysia: 15 August 2013 (English)
    - Singapore: 5 August 2013 (English)
    - Canberra, Australia: 12 August 2013 (English)
    - Melbourne, Australia: 30 October 2013 (English)
    - Sydney, Australia: 26 August 2013 (English)

    To register for a course or to learn more about Oracle's virtualization curriculum, go to http://education.oracle.com/virtualization.

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It's a Microsoft SQL Azure Lab and its codename is Microsoft "Data Explorer". According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), which can then be used by other applications. Nonetheless, we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data

    To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an "SSIS on Azure" aimed at the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to a comment on his post, my first impression was confirmed: "…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging." The typical operations of the ETL phase (processing and organization of data in different formats) can be obtained thanks to Data Explorer Mashup (a screenshot of the tool is included in the original post). The flexibility in the manipulation of information comes from the Data Explorer Formula Language, a formula-based, Excel-style language (also shown in the original post). Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx

    In light of this new project, there is no doubt about the intention of Microsoft to get closer and closer to the Power User, providing them with flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • Hardware for a home server running Windows Server 2008 R2 Hyper-V or Microsoft Hyper-V Server 2008 R2

    - by David Hayes
    Hi, I'm planning to build a server to do the following:
    - Act as a file server (videos, pictures, music)
    - Run Squeezebox Server
    - Run the Zune software to allow wireless syncing to Windows Phone 7

    I'd also like to aim for:
    - Low power usage (I'd settle for less than the 90-100 watts I'm using at the moment)
    - Flexibility; I might want to add a web server or SharePoint or...
    - Something I can learn/test on; work is mainly a Windows shop but I do have Linux experience too
    - I'd like to take a look at App-V (application virtualization) too
    - I'd like it to cost less than $1000
    - Quiet would be nice but not essential (it'll be in the basement)

    I'm thinking of getting a TechNet subscription to get access to Windows Server 2008 R2 at a reasonable price ($199). So my plan was this:
    - Get a bunch of 2TB Caviar Green drives to RAID up (RAID 1 or 6 probably)
    - Get a quad-core CPU (Intel i5/i7 probably)
    - Install a hypervisor
    - Install W2K8 R2 Storage Server for a NAS
    - Install Windows 7 Pro to run Zune/Squeezebox
    - Install any other machines I want to play with

    Questions:
    - Can anyone see any issues with this or have any better ideas?
    - Do you think I'd need an i7 over an i5? Is 4 cores enough/too much?
    - Can anyone suggest a nice, reasonably priced case that will hold 6-8 drives and stay cool?
    - Should I wait for Sandy Bridge parts?

    Read the article

  • Validating signature trust with gpg?

    - by larsks
    We would like to use gpg signatures to verify some aspects of our system configuration management tools. Additionally, we would like to use a "trust" model where individual sysadmin keys are signed with a master signing key, and then our systems trust that master key (and use the "web of trust" to validate signatures by our sysadmins). This gives us a lot of flexibility, such as the ability to easily revoke the trust on a key when someone leaves, but we've run into a problem. While the gpg command will tell you if a key is untrusted, it doesn't appear to return an exit code indicating this fact. For example:

        # gpg -v < foo.asc
        Version: GnuPG v1.4.11 (GNU/Linux)
        gpg: armor header:
        gpg: original file name=''
        this is a test
        gpg: Signature made Fri 22 Jul 2011 11:34:02 AM EDT using RSA key ID ABCD00B0
        gpg: using PGP trust model
        gpg: Good signature from "Testing Key <[email protected]>"
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg: There is no indication that the signature belongs to the owner.
        Primary key fingerprint: ABCD 1234 0527 9D0C 3C4A CAFE BABE DEAD BEEF 00B0
        gpg: binary signature, digest algorithm SHA1

    The part we care about is this:

        gpg: WARNING: This key is not certified with a trusted signature!
        gpg: There is no indication that the signature belongs to the owner.

    The exit code returned by gpg in this case is 0, despite the trust failure:

        # echo $?
        0

    How do we get gpg to fail in the event that something is signed with an untrusted signature? I've seen some suggestions that the gpgv command will return a proper exit code, but unfortunately gpgv doesn't know how to fetch keys from keyservers. I guess we can parse the status output (using --status-fd) from gpg, but is there a better way?
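
    For what it's worth, a sketch of the --status-fd route mentioned at the end: run gpg with machine-readable status output and fail unless the signature is both good and made by a key with full or ultimate trust. The status keywords (GOODSIG, TRUST_FULLY, TRUST_ULTIMATE) are the ones documented in GnuPG's doc/DETAILS; treat the exact invocation as an assumption to verify against the gpg version in use. Python is used here only to keep the parsing readable; the same check could live in a shell wrapper.

        import subprocess
        import sys

        def verify_trusted(path):
            # --status-fd 1 sends machine-readable "[GNUPG:] ..." lines to stdout.
            proc = subprocess.run(
                ["gpg", "--status-fd", "1", "--verify", path],
                stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, text=True,
            )
            status = {line.split()[1] for line in proc.stdout.splitlines()
                      if line.startswith("[GNUPG:] ") and len(line.split()) > 1}
            good = "GOODSIG" in status
            trusted = status & {"TRUST_FULLY", "TRUST_ULTIMATE"}
            return proc.returncode == 0 and good and bool(trusted)

        if __name__ == "__main__":
            sys.exit(0 if verify_trusted(sys.argv[1]) else 1)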

    Read the article

  • Proper Network Infrastructure Setup: DMZ, VPN, Routing Hardware Question

    - by NickToyota
    Greetings Server Fault Universe. So here's a quick background: two weeks ago I started a new position as the systems administrator for an expanding health services company of just over 100 persons. The individual I was replacing left the company with little to no notice. Basically, I have inherited a network with one main HQ (where I am situated), which has existed for over 10 years, and five smaller offices (fewer than 20 persons each). I am trying to make sense of the current setup. The network at the HQ includes:
    - A Linksys RV082 router providing internet access for employees and site-to-site VPN connecting the smaller offices (using an RV042 each). We have both cable and DSL lines connected to balance traffic (however, this does not work at all and is not my main concern right now).
    - A Cisco IronPort appliance. This is the main gateway for our incoming and outgoing email. It has both an external IP and an internal IP.
    - Lotus Domino inbound and outbound email servers connected to the Cisco gateway mentioned above. These also have an external IP and an internal IP.
    - Two Windows 2003 and 2008 boxes running as domain controllers, with DNS of course. These also have both an external IP and an internal IP.
    - Website and webmail servers, also on both external and internal IPs.

    I am still confused as to why there are so many servers connected directly to the internet. I am seriously looking to redesign this setup with proper security practices in mind (my highest concern), and I need a proper firewall setup for the external/internal servers along with a VPN solution for about 50 employees. Budget is not a concern, as I have been given some flexibility to purchase necessary solutions. I have been told a Cisco ASA appliance may help. Does anyone out in the Server Fault Universe have some recommendations? Thank you all in advance.

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5) dev team. What are the pros and cons of different approaches, e.g. splitting up over different machines or packing everything up onto one machine? In your experience, what are the best practices I should follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person? I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment, more a tier 2 developer testing environment, and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are:
    - creating one all-inclusive machine for each dev team member, giving them freedom to manage it
    - creating a shared environment managed by one person, with a somewhat formalized change request process

    What golden mean would you recommend, and why?

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious perl scripts and OpenBSD's pf. pf is great in that you can provide it nice tables of IP addresses and it will efficiently handle blocking based on them. However for various reasons (before my time) they made the decision to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious over adding that many rules into an iptable. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS which whilst I'm perfectly capable of doing it, I'd rather not do from a patching, security and consistency perspective. Other than those two it looks like nfblock is a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
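
    In case it helps the comparison, a sketch of what the ipset route looks like in practice (assuming a kernel/iptables where ipset and the 'set' match are available, which, as noted above, is exactly the sticking point on stock CentOS): the address list is loaded as one set via `ipset restore`, and a single iptables rule matches against it, instead of thousands of individual rules. The set name and file path are illustrative, and the Python here just writes the restore file.

        # Writes an ipset "restore" file for a large blacklist; the set is then
        # matched by a single iptables rule. Names and paths are illustrative.
        ips = ["203.0.113.10", "203.0.113.11", "198.51.100.7"]   # normally thousands

        with open("/tmp/scrapers.ipset", "w") as fh:
            fh.write("create scrapers hash:ip\n")
            for ip in ips:
                fh.write("add scrapers %s\n" % ip)

        # Then, outside this script (assuming ipset and the iptables 'set' match exist):
        #   ipset restore < /tmp/scrapers.ipset
        #   iptables -I INPUT -m set --match-set scrapers src -j DROP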

    Read the article

  • Load balanced IIS. Should I use NLB, or linux-based reverse proxy, or something else?

    - by growse
    What would be the best approach for load-balancing at least 2-3 Windows 2008 R2 IIS webservers running a multitude of .NET applications? My choices appear to be: 1) Hardware-based network device load balancer, like a Cisco CSS 2) Windows NLB 3) Some sort of linux based proxy, either haproxy or other The three servers sit as VMs on a vSphere farm, so I have the ability to clone to up the instance count in times of high load. I control the switch that the vSphere hosts are plugged into (Cisco 3750), but don't control the switching/routing infrastructure beyond that to the clients. (1) Is too expensive, and probably overkill for my needs. I've included this in case someone figures out a cunning way to do it on my existing network kit, which I doubt. (2) would seem to be the obvious "built-in" option, but seems to be quite fiddly messing around with network interfaces, multicast, and generally other things that seem to be needlessly complex. It's also fairly stupid, in that it can't remove hosts from the pool if they start throwing 500 errors or otherwise go wrong (3) is the most interesting option, as it would appear to offer the most flexibility and customizability, but without having to mess around with the network. However, while I'm familiar with the reverse-proxy capabilities of lighttpd etc, I'm not that well read on other options like HAProxy, which might be able to offer a lot more. Which would you go for, and is there anything I've not thought of?

    Read the article

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMWare virtualization environment (three Dell R710's with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMWare HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VM's will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported" http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

    Read the article

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics, items like:
    - number of databases backed up
    - size of the DB backup file
    - size of the DB backup file compressed
    - time to make the backup
    - time to zip the file

    I want to be able to: A) have notifications if the jobs are not run according to schedule; B) set thresholds on the statistics which would trigger notifications; C) trend and graph the statistics. I am planning on sending this information to the monitoring application through an HTTP POST, or the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.
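
    Whichever monitoring system wins out, the "push statistics over HTTP POST" half can be sketched now. Everything here (the endpoint URL, field names, the threshold) is an invented placeholder; the idea is simply to ship one flat JSON document per backup job so the receiver can alert on missing runs and set thresholds on, or graph, the numeric fields.

        import json
        import time
        import urllib.request

        # Field names, the threshold and the endpoint are illustrative placeholders.
        stats = {
            "job": "mysql-nightly",
            "finished_at": int(time.time()),
            "databases_backed_up": 14,
            "backup_bytes": 52428800,
            "backup_bytes_compressed": 9437184,
            "backup_seconds": 312,
            "compress_seconds": 41,
        }

        # A local sanity threshold before the data even reaches the monitoring system.
        if stats["backup_bytes_compressed"] < 1000000:
            raise SystemExit("compressed backup suspiciously small - refusing to report success")

        req = urllib.request.Request(
            "http://monitor.example.com/api/backup-stats",   # placeholder URL
            data=json.dumps(stats).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=10)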

    Read the article

  • Recommended open-source firmware for ASUS RT-N16

    - by MasterF
    I have recently acquired an ASUS RT-N16 router. My original plan for it was to install Tomato on it. However, after checking their website I found out that the firmware has not been updated in the last 2 years. There seem to be a few updated mods, but none of them really seemed mature/stable/well-documented. I would like to know what other people recommend as open-source firmware for this router. I know the answers will probably be subjective, so I will give a bit of background on my needs:
    - For now I will only use the Wi-Fi on an Android phone.
    - The connection will not be shared with anyone (so QoS is optional).
    - I want a stable (wired) connection on my PC (for online gaming etc.).
    - I want the (wired) download/upload speeds to be as close as possible to those achieved by directly plugging the Ethernet cable into the PC's network card; I have a 100 Mbps connection.
    - My ISP uses PPPoE.
    - My technical level: I am a software developer and I have good knowledge of bash scripting, but no experience with networking.

    Also, I know that I could probably just use the stock firmware (and maybe will use it for a while), but I'm interested in trying an open-source version (for more features, flexibility, as a learning exercise, etc.).

    Read the article

  • Understanding the Mounting of a Filesystem

    - by Tom H.
    I'm new to linux and want to check my understanding of how mounting/filesystems work. I read related manpages, but just want to be sure. I have a partition say /dev/sda5 that is currently mounted to /home with various subdirs. It is my understanding that this means /dev/sda5 has its own portable filesystem that can be moved anywhere in the main filesystem. Questions: If I unmount /dev/sda5 from /home (# umount /home) and then mount it to /var/www/ (which is empty) (# mount -t ext3 /dev/sda5 /var/www) and replace the fstab entry, with /dev/sda5 /var/www ext3 defaults,noatime,nodev 1 2 and # mount -a, Q1) are all of the contents of /home now accessible under /var/www/ (i.e. /home/username -> /var/www/username)? Q2) Are all of the permissions from the /home filesystem kept intact in this new location? Anything else I should be concerned with? Just want to make sure I don't go wipe/corrupt anything. Coming from Windows the filesystem architecture takes getting used to (though I'm loving the flexibility!).

    Read the article

  • Port 22 is not responding

    - by Emanuele Feliziani
    I'm trying to make the jump from shared hosting to a VPS for better performance and greater flexibility, but I am stuck with the fact that I can't access the machine via SSH. First of all, the machine is a CentOS 6.3 cPanel x64 with WHM 11.38.0. sshd is running (it appears in the list of running processes). Doing a port scan, I see that port 22 is not responding. Port 21 is, but I am not able to access the machine via FTP (I think it's a security measure, but I don't know where to disable/enable it). So, I'm stuck in WHM and have no way to access the configuration of the machine, neither via SSH nor with FTP/SFTP. When trying to connect with SSH via Terminal I only get this:

        ssh: connect to host xx.xx.xxx.xxx port 22: Operation timed out

    I also tried to access it with the hostname instead of the IP address, and it's the same. There seems to be no firewall in WHM, and I have whitelisted my home IP address for SSH access, though there were no restrictions in the first place. I have been wandering through all the settings and options in WHM for several hours now, but can't seem to find anything. Does anybody have a clue as to where I should start investigating?

    Update: Thanks everyone. It was in fact a matter of the firewall. There was a firewall not controlled by the WHM software. I managed to crack into the console from the VPS control panel (a terrible, terrible Java app that barely took my keyboard input) and disabled the firewall altogether by running service iptables stop, so I was able to access the console via SSH with the terminal. Now I will have to set up the firewall again, because the command I ran looks like it has completely wiped the iptables rules. Can you recommend any newbie-friendly resource where I can learn how to go about this and what I should block? Or should I just go with something like this: http://configserver.com/cp/csf.html ? Thanks again to everyone who helped me out.
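
    On the "what should I block" follow-up, a sketch of a conservative baseline to restore before layering CSF or anything else on top: default-drop on INPUT, allow loopback and established traffic, and open only the needed services. It is written as a small Python script that emits an iptables-restore file; the port list is an assumption (SSH, FTP, HTTP/HTTPS, and the usual cPanel/WHM ports), so review it before loading.

        # Emits a minimal iptables-restore ruleset; review the port list (an
        # assumption: SSH, FTP, HTTP/HTTPS, cPanel/WHM) before loading it with
        #   iptables-restore < baseline.rules
        open_tcp_ports = [22, 21, 80, 443, 2082, 2083, 2086, 2087]

        lines = [
            "*filter",
            ":INPUT DROP [0:0]",
            ":FORWARD DROP [0:0]",
            ":OUTPUT ACCEPT [0:0]",
            "-A INPUT -i lo -j ACCEPT",
            "-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
        ]
        lines += ["-A INPUT -p tcp --dport %d -j ACCEPT" % port for port in open_tcp_ports]
        lines += ["-A INPUT -p icmp --icmp-type echo-request -j ACCEPT", "COMMIT"]

        with open("baseline.rules", "w") as fh:
            fh.write("\n".join(lines) + "\n")
        print("wrote baseline.rules - load it with: iptables-restore < baseline.rules")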

    Read the article

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a proxmox HA configuration with two Dell R710 machines (dual 6 core processors in each) with enterprise level drive raid arrays. I would be using DRBD with a quorum disk on a third machine. I would dedicate two 1GB nics on each server to the DRBD communications. We would have approximately 12 to 14 Virtual Machines running on this pair of servers. The proxmox manual recommends creating two DRBD resources - one for the Virtual Machines that normally run on ServerA and one for the Virtual Machines that normally run on ServerB. This is because of the Primary/Primary state in which this configuration runs. If both servers have VMs talking to the same DRBD resource and a split brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.

    Read the article

  • Replace Linux Boot-Drive | ext3 to btrfs

    - by bardiir
    I've got a headless server running Debian Linux currently:

        Linux vault 3.2.0-3-686-pae #1 SMP Mon Jul 23 03:50:34 UTC 2012 i686 GNU/Linux

    The root filesystem is located on an ext3 partition on the main harddrive. My data is located on multiple harddrives that are bundled into a storage pool running with btrfs:

        UUID=072a7fce-bfea-46fa-923f-4fb0827ae428 /     ext3  errors=remount-ro            0 1
        UUID=b50965f1-a2e1-443f-876f-578b5f93cbf1 none  swap  sw                           0 0
        UUID=881e3ad9-31c4-4296-ae60-eae6c98ea45f none  swap  sw                           0 0
        UUID=30d8ae34-e2f0-44b4-bbcc-22d761a128f6 /data btrfs defaults,compress,autodefrag 0 0

    What I'd like to do is place / into the btrfs pool too. The ideal solution would provide the flexibility to boot from any disk in the system alike, so if the main drive fails I'd just need to swap another one into the main slot and it would be bootable like the main one. My main problem is, everything I do needs to result in a bootable system that is open to ssh logins via the network, as this server is 100% headless, so there is no possibility to boot it from a live CD or anything like that. So I'd like to be extra sure everything works out fine :) How would I best go about this? Can anybody point me to guides or whip something up for these tasks? Anything I forgot to think about?
    - Copy the root data into the btrfs pool, adjust mountpoints, ...
    - Adjust GRUB to boot from the btrfs pool UUID or the local device where GRUB is installed
    - Sync GRUB to all harddrives so every drive is equally bootable (is this even possible without destroying the btrfs partitions on the drives, or would I need to disconnect the drives, install GRUB on them and then connect them back with a slightly smaller partition?)
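
    On the "every drive equally bootable" point, a sketch of the usual approach: install the boot loader onto each member disk, so whichever disk ends up in the primary slot can boot. This assumes GRUB 2 (the default on that Debian release) and that the device list matches the machine; grub-install writes to the MBR/embed area rather than into the partitions, but verify that on a test box before trusting it on a headless server.

        import subprocess

        # Illustrative device list - adjust to the disks actually in the btrfs pool.
        disks = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

        for disk in disks:
            # grub-install writes the GRUB 2 boot code to the disk's MBR/embed area;
            # it does not repartition or reformat anything.
            subprocess.run(["grub-install", disk], check=True)

        # Regenerate the config once afterwards so the menu entries match the new root.
        subprocess.run(["update-grub"], check=True)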

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old. We have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS with LVM over that to create 100TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, behaves better under disk failure (checking the larger filesystem can take a day or more), and can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server with glusterfs running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • Expand a volume residing on one X-RAID disk installed on a Netgear ReadyNas Duo v2

    - by Sid
    I've got a Netgear ReadyNAS Duo v2 (2 disk slots). The system is configured with X-RAID, which does not provide flexibility but automatically expands based on a sort of RAID-5 logic. I had two 500 GB hard disks installed, redundant, so I had a 500 GB volume. I wanted to upgrade the whole system to two 3 TB hard disks, maintaining both the data already on the NAS and the data on one of the two 3 TB hard disks. So I did this:
    - Unplugged one disk from the ReadyNAS. Now the ReadyNAS has 1 * 500 GB, non-redundant.
    - Plugged in one empty 3 TB hard disk. Now the ReadyNAS has 1 * 500 GB + 1 * 3 TB, redundant.
    - Waited for the resync.
    - Then unplugged the 500 GB hard disk, so that I have only the 3 TB hard disk with the previous data.

    Now what I want is to copy the data from my other 3 TB hard disk onto the NAS, so that I can then plug that disk into the NAS and use it for redundancy. The problem is that the NAS has the (single) 3 TB hard disk in X-RAID, but the volume does not expand to 3 TB; it remains fixed at 500 GB. Is there a way to tell the ReadyNAS to force expanding the volume to the whole disk without plugging in another hard disk of the same size?

    Read the article

  • Webcast Q&A: Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter

    - by Kellsey Ruppel
    This week we had the fifth webcast in our WebCenter in Action webcast series, "Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter", where customers Giovani Dacumos and Minh Ong from the Los Angeles Department of Building & Safety (LADBS), and Sheetal Paranjpye and Rajiv Desai from Oracle Partner 3Di, shared how Oracle WebCenter is powering LADBS' externally facing website and providing a superior self-service experience for their customers. We asked the speakers to provide some dialogue for Q&A.

    Giovani Dacumos, Director of Systems, and Minh Ong, LADBS

    Q: Did you run into any issues when integrating all of the different applications together?
    A: Yes. We did have issues integrating secure sign-on between the portal and other legacy applications. We used portlets and iframes to overcome those. This is a new technology for us and we are also learning as we go, so there were a lot of challenges in developing and implementing our vision.

    Q: What has been the biggest benefit your end users have seen?
    A: The biggest benefit for our end users is ease of use. We've given them a system that provides a new and improved source of information, as well as a very organized flow of transaction processing. It has made our online service very user friendly.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: There was no internal resistance during the implementation, only challenges. As mentioned earlier, this is a new technology for us. We've come across issues that needed assistance from Oracle. Working with 3Di and Oracle has helped us tremendously to find solutions to our implementation issues.

    Q: Given the performance, what do you estimate to be the top-end capacity of the system?
    A: With the current performance and architecture we have, we are able to support approximately 300-400 concurrent users. We would need more hardware to support additional user load.

    Q: What's the overview or summary of feedback from the users interacting with the site?
    A: LADBS has a wide spectrum of customers, from simple users like homeowners to large construction firms. Anything new that we offer could be a little bit challenging for some, but overall, the customers liked it. They saw a huge improvement in usability.

    Q: Can you describe the impressions about the site before and after the project within LADBS?
    A: The old site was using old technology and it was hard for us to keep building onto it as we got more business requirements. It made our application seem a bit complicated. It was confusing for our new customers to use, and we've improved on this with the new site. It's now easier for them to complete their transactions and, at the same time, it has allowed us to provide more useful information.

    Sheetal Paranjpye and Rajiv Desai, 3Di

    Q: Did you run into any obstacles when implementing the solution?
    A: Yes, we did run into some obstacles. One of the key showstoppers was an issue with portlet-to-portal communication. The GIS viewer (portlet) needed information to be passed to and from Permit LA (the portal), but we were able to get everything configured and up and working quickly!

    Q: Was there a lot of custom work that needed to be done for this particular solution?
    A: We have done some customizations where workflows/task flows are involved.

    Q: What do you think were the keys to success for rolling out WebCenter?
    A: Having a service-oriented architecture and using portlets have been the key areas for rolling out Oracle WebCenter at LADBS. The Oracle WebCenter Content integration gives business users the flexibility to maintain the content, which has really cut down on the reliance on IT, and employee productivity has increased as a result.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action: "Los Angeles Department of Building & Safety Lowers Customer Service Costs with Oracle WebCenter" from Oracle WebCenter.

    Read the article

  • The Microsoft Ajax Library and Visual Studio Beta 2

    - by Stephen Walther
    Visual Studio 2010 Beta 2 was released this week and one of the first things that I hope you notice is that it no longer contains the latest version of ASP.NET AJAX. What happened? Where did AJAX go? Just like Sting and The Police, just like Phil Collins and Genesis, just like Greg Page and the Wiggles, AJAX has gone out of band! We are starting a solo career. A Name Change First things first. In previous releases, our Ajax framework was named ASP.NET AJAX. We now have changed the name of the framework to the Microsoft Ajax Library. There are two reasons behind this name change. First, the members of the Ajax team got tired of explaining to everyone that our Ajax framework is not tied to the server-side ASP.NET framework. You can use the Microsoft Ajax Library with ASP.NET Web Forms, ASP.NET MVC, PHP, Ruby on RAILS, and even pure HTML applications. Our framework can be used as a client-only framework and having the word ASP.NET in our name was confusing people. Second, it was time to start spelling the word Ajax like everyone else. Notice that the name is the Microsoft Ajax Library and not the Microsoft AJAX library. Originally, Microsoft used upper case AJAX because AJAX originally was an acronym for Asynchronous JavaScript and XML. And, according to Strunk and Wagnell, acronyms should be all uppercase. However, Ajax is one of those words that have migrated from acronym status to “just a word” status. So whenever you hear one of your co-workers talk about ASP.NET AJAX, gently correct your co-worker and say “It is now called the Microsoft Ajax Library.” Why OOB? But why move out-of-band (OOB)? The short answer is that we have had approximately 6 preview releases of the Microsoft Ajax Library over the last year. That’s a lot. We pride ourselves on being agile. Client-side technology evolves quickly. We want to be able to get a preview version of the Microsoft Ajax Library out to our customers, get feedback, and make changes to the library quickly. Shipping the Microsoft Ajax Library out-of-band keeps us agile and enables us to continue to ship new versions of the library even after ASP.NET 4 ships. Showing Love for JavaScript Developers One area in which we have received a lot of feedback is around making the Microsoft Ajax Library easier to use for developers who are comfortable with JavaScript. We also wanted to make it easy for jQuery developers to take advantage of the innovative features of the Microsoft Ajax Library. To achieve these goals, we’ve added the following features to the Microsoft Ajax Library (these features are included in the latest preview release that you can download right now): A simplified imperative syntax – We wanted to make it brain-dead simple to create client-side Ajax controls when writing JavaScript. A client script loader – We wanted the Microsoft Ajax Library to load all of the scripts required by a component or control automatically. jQuery integration – We love the jQuery selector syntax. We wanted to make it easy for jQuery developers to use the Microsoft Ajax Library without changing their programming style. 
If you are interested in learning about these new features of the Microsoft Ajax Library, I recommend that you read the following blog post by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2009/10/15/announcing-microsoft-ajax-library-preview-6-and-the-microsoft-ajax-minifier.aspx Downloading the Latest Version of the Microsoft Ajax Library Currently, the best place to download the latest version of the Microsoft Ajax Library is directly from the ASP.NET CodePlex project: http://aspnet.codeplex.com/ As I write this, the current version is Preview 6. The next version is coming out at the PDC. Summary I’m really excited about the future of the Microsoft Ajax Library. Moving outside of the ASP.NET framework provides us the flexibility to remain agile and continue to innovate aggressively. The latest preview release of the Microsoft Ajax Library includes several major new features including a client script loader, jQuery integration, and a simplified client control creation syntax.

    Read the article

  • What is the Oracle Utilities Application Framework?

    - by Anthony Shorten
    The Oracle Utilities Application Framework is a reusable, scalable and flexible java based framework which allows other products to be built, configured and implemented in a standard way. Note: Even though the Framework is built in java it can be integrated with COBOL based extensions for backward compatibility. When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects of that product be separated to allow for reuse and independence from technical issues. The idea was that all the technical aspects would be concentrated in this separate product (i.e. a framework) and allow all products using the framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities Application Framework (oufw is the product code). The technical components are contained in the Oracle Utilities Application Framework which can be summarized as follows: Metadata - The Oracle Utilities Application Framework is responsible for defining and using the metadata to define the runtime behavior of the product. All the metadata definition and management is contained within the Oracle Utilities Application Framework. UI Management - The Oracle Utilities Application Framework is responsible for defining and rendering the pages and responsible for ensuring the pages are in the appropriate format for the locale. Integration - The Oracle Utilities Application Framework is responsible for providing the integration points to the architecture. Refer to the Oracle Utilities Application Framework Integration Overview for more details Tools - The Oracle Utilities Application Framework provides a common set of facilities and tools that can be used across all products. Technology - The Oracle Utilities Application Framework is responsible for all technology standards compliance, platform support and integration. There are a number of products from the Tax and Utilities Global Business Unit as well as from the Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These products require the Oracle Utilities Application Framework to be installed first and then the product itself installed onto the framework to complete the installation process. There are a number of key benefits that the Oracle Utilities Application Framework provides to these products: Common facilities - The Oracle Utilities Application Framework provides a standard set of technical facilities that mean that products can concentrate in the unique aspects of their markets rather than making technical decisions. Common methods of configuration - The Oracle Utilities Application Framework standardizes the technical configuration process for a product. Customers can effectively reuse the configuration process across products. Multi-lingual and Multi-platform - The Oracle Utilities Application Framework allows the products to be offered in more markets and across multiple platforms for maximized flexibility. Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical aspects of a product implementation. Customers can effectively reuse the technical implementation process across products. Quicker adoption of new technologies - As new technologies and standards are identified as being important for the product line, they can be integrated centrally benefiting multiple products. 
Cross-product reuse - As enhancements to the Oracle Utilities Application Framework are identified by a particular product, all products can potentially benefit from the enhancement. Note: Use of the Oracle Utilities Application Framework does not preclude the introduction of product-specific technologies or facilities to satisfy market needs. The framework minimizes the need for, and assists in the quick integration of, a new product-specific piece of technology (if necessary). The Framework is not available as a product itself and is bundled with Tax and Utilities Global Business Unit products. At the present time the following products are on the Framework:
    - Oracle Utilities Customer Care and Billing (V2 and above)
    - Oracle Enterprise Taxation Management (V2 and above)
    - Oracle Utilities Business Intelligence (V2 and above)
    - Oracle Utilities Mobile Workforce Management (V2 and above)

    Read the article
