Search Results

Search found 59995 results on 2400 pages for 'web experience management'.

Page 393/2400 | < Previous Page | 389 390 391 392 393 394 395 396 397 398 399 400  | Next Page >

  • Need help/guidance about creating a desktop application with gui

    - by Somebody still uses you MS-DOS
    I'm planning to build a desktop application in Python, to learn some desktop concepts. I'm going to use GTK or Qt; I still haven't decided which. The key point is that I would like the application to be callable from the command line AND usable through a GUI, so it serves command-line fans and GUI users alike. It would be interesting to add a web interface in the future too, so it could run on a server somewhere with an HTML front end built from a template language. I'm thinking about two approaches: creating a "model" with a simple interface which is called from a desktop/web implementation, or creating a "model" with an HTML interface and embedding a browser component so I could reuse all the code in both desktop/web scenarios. My questions: exactly which concepts are involved in this project? What advantages and disadvantages does each approach have? Are both feasible? By "interface" I just mean some interfaces.py files with def calls. Is that a bad approach? I would like book recommendations, resources on both options, or source code from projects that share the same GUI/cmd/web goals I'm after. Thanks in advance!
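
    A minimal sketch of the first approach, as I picture it (module and function names are hypothetical): the core logic lives in a plain Python module with no UI imports, and the command-line layer is a thin wrapper over it.

        # model.py -- hypothetical "model" module: pure logic, no UI imports
        def greet(name):
            return "Hello, %s!" % name

        # cli.py -- thin command-line front end over the same model
        import argparse
        from model import greet

        def main():
            parser = argparse.ArgumentParser(description="demo app")
            parser.add_argument("name", help="who to greet")
            args = parser.parse_args()
            print(greet(args.name))

        if __name__ == "__main__":
            main()

    A GTK/Qt window or a Django view would import model.greet the same way, which is what would keep the desktop, cmd and web front ends in sync.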

    Read the article

  • Twitter API similar to Google Alert

    - by Felix Perdana
    I am trying to create a web application which has functionality similar to Google Alerts (by similar I mean the user can provide an email address for the alert to be sent to, daily or hourly). The only limitation is that it gives alerts to users based only on a certain keyword or hashtag. I think I have found the fundamental API needed for this web application: https://dev.twitter.com/docs/api/1/get/search The problem is that I still don't know all the web technologies needed for this application to work properly. For example: Do I have to store all of the searched keywords in a database? Do I have to keep polling with AJAX requests all the time in order to keep my database updated? What if a keyword the user provided is so popular right now that it gets thousands of tweets in an hour (not to mention there might be several emails that request several trending topics)? By the way, I am trying to build this application using PHP. So please let me know what kind of techniques I need to learn for such a web app (and some references, maybe)? Any kind of help will be appreciated. Thanks in advance :) Regards, Felix Perdana
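
    Although I plan to build it in PHP, here is a sketch in Python of the polling loop I have in mind, against the v1 search endpoint linked above (Python 2, contemporary with that API; the JSON field names are to the best of my knowledge, so treat them as assumptions):

        # hypothetical poller, run on a schedule (cron) for each stored keyword
        import json
        import urllib
        import urllib2

        def poll_keyword(keyword, since_id=0):
            # since_id makes each poll fetch only tweets newer than the last run
            url = ("http://search.twitter.com/search.json?q=%s&since_id=%d"
                   % (urllib.quote(keyword), since_id))
            data = json.load(urllib2.urlopen(url))
            for tweet in data.get("results", []):
                # in the real app: append to the user's pending daily/hourly digest
                print(tweet["from_user"] + ": " + tweet["text"])
            return data.get("max_id", since_id)  # persist for the next poll

    The point of the sketch: the polling happens server-side on a schedule rather than via browser AJAX, so the database only needs each keyword plus the last max_id seen for it.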

    Read the article

  • ASP.NET MVC 3 Hosting :: Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed

    - by mbridge
    You can build a sample application on ASP.NET MVC 3 first, for deploying to your hosting. To try it out, first put it on a web server where ASP.NET MVC 3 is installed. In this posting I will tell you which files you need and where you can find them: the files you have to upload to get an application running on a server where ASP.NET MVC 3 is not installed. One adjustment that helps is changing the reference to System.Web.Helpers.dll to be a local one, so that it is copied to the bin folder of your application. The first file in the list is my web application dll, and you don't need that one to get ASP.NET MVC 3 running; all the other files are located in the following folder: C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\ If more files are needed in some other scenario, please leave me a comment here. And… don't forget to convert the folder in IIS to an application. While developing an application locally this isn't a problem, but when you are ready to deploy your application to a hosting provider, it might well be a problem if the hoster does not have the ASP.NET MVC assemblies installed in the GAC. Fortunately, ASP.NET MVC is still bin-deployable: if your hosting provider has ASP.NET 3.5 SP1 installed, you only need to include the MVC DLL; if your hosting provider is still on plain ASP.NET 3.5, you need to deploy all three of the MVC dependency assemblies. It turns out that it's really easy to do. Also, ASP.NET MVC runs in Medium Trust, so it should work with most hosting providers' Medium Trust policies (it's always possible that a hosting provider customizes their Medium Trust policy to be draconian). Deployment is easy when you know what to copy into the archive when publishing your ASP.NET MVC 3 (or later) web site. What I like to do is use the Publish feature of Visual Studio to publish to a local directory and then upload the files to my hosting provider; if your hosting provider supports FTP, you can often skip this intermediate step and publish directly to the FTP site. The first thing I do in preparation is to go to my MVC web application project and expand the References node in the project tree, select the assemblies mentioned above and, in the Properties dialog, set Copy Local to True. Now just right-click on your application and select Publish; this brings up the Publish wizard. In this example I selected a local directory, and when I hit Publish, all the files needed to deploy my app are available in the directory I chose, including the assemblies that were in the GAC. Other ASP.NET MVC 3 articles: New Features in ASP.NET MVC 3, and ASP.NET MVC 3 First Look.
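
    To the best of my knowledge, the set of assemblies usually copied for an ASP.NET MVC 3 bin deployment (from the Web Pages v1.0 Assemblies folder named above, plus System.Web.Mvc.dll from the MVC 3 install folder) is:

        Microsoft.Web.Infrastructure.dll
        System.Web.Helpers.dll
        System.Web.Mvc.dll
        System.Web.Razor.dll
        System.Web.WebPages.dll
        System.Web.WebPages.Deployment.dll
        System.Web.WebPages.Razor.dll

    Set Copy Local to True on each of these references so the Publish step drops them into bin next to the application dll.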

    Read the article

  • Oracle × SCSK seminar on Web customer experience management (Japanese announcement; original title garbled)

    - by Shinji Watanabe
    Most of this Japanese-language announcement was lost to character corruption; the details that survive: a joint Oracle / SCSK half-day seminar on Web customer experience (Web CMS and EC), held Wednesday, October 10, 2012, 14:00-17:15 (reception from 13:30), capacity around 50. Agenda: 14:00-14:10 opening session on Oracle Fusion Middleware; 14:10-15:00 keynote on User Experience by a guest CEO; 15:00-15:45 session on B2C/B2B customer experience built on CRM/HCM applications; 15:45-16:00 break; 16:00-16:45 SCSK session on a Web solution ranked No. 1 in a December 14, 2011 industry survey, featuring the PfizerPRO Web site; 16:45-17:15 SCSK session on CMS-based Web site operation.

    Read the article

  • Using the ASP.NET Membership API with SQL Server / SQL Azure: The new “System.Web.Providers” namespace

    - by Harish Ranganathan
    The Membership API came in .NET 2.0 and was a huge enhancement for building web applications with users, managing roles, permissions etc.  The Membership API by default uses SQL Express, and until Visual Studio 2008 it was available only through the ASP.NET Configuration manager screen (Website – ASP.NET Configuration, or Project – ASP.NET Configuration); for every application one had to manually visit that screen to start using the security and other settings. Upon doing that, the default SQL Express database aspnet.mdf is created to store all the user profiles. Starting with Visual Studio 2010 and .NET 4.0, the default website template includes the Membership API controls as part of the page, i.e. when you create a “File – New – ASP.NET Web Application” or an “ASP.NET MVC Application”, the Login/Register controls are enabled in the MasterPage by default, wired to the “ApplicationServices” setting in the web.config file with a connection string pointing to the SQL Express database. In fact, when you run the default website, click “Logon” –> “Register”, enter the details for registration and click “Register”, that is the moment the aspnet.mdf file is created with the tables for Users, Roles, UsersInRoles, Profile etc. Now, this uses the default SQL Express database within the App_Data folder.  If you want to move your Membership information to some other database, such as SQL Server, SQL CE or SQL Azure, you need to manually run the aspnet_regsql command and specify the destination database name. This creates all the tables, procedures and views required to handle the Membership information.  Thereafter you can change the connection string for “ApplicationServices” to point to the database where you ran all the scripts. Now, enter “System.Web.Providers” Alpha. This is available as part of the NuGet package library.  Scott Hanselman has a neat post describing the steps required to get it up and running, as well as the basic changes, at http://www.hanselman.com/blog/IntroducingSystemWebProvidersASPNETUniversalProvidersForSessionMembershipRolesAndUserProfileOnSQLCompactAndSQLAzure.aspx It covers pretty much what the new System.Web.Providers do. One thing I wanted to clarify is that the new “System.Web.Providers” adds a lot of new settings, also marked as the defaults, to the web.config. Even now, they use SQL Express as the default database. But if you change the connection string for “DefaultConnection” under connectionStrings to point to your SQL Server or SQL Azure instance, the Membership API will create all the tables, procedures and views at the destination specified (i.e. SQL Server or SQL Azure). In my case, I modified the DefaultConnection to point to my SQL Azure database.  Next, I hit F5 to run the application and the default view loaded.  I clicked on “LogOn” and then “Register”, since I knew there were no tables/users as of then.  One thing to note is that I had put “NewDB” as the database name in the connection string that points to SQL Azure; NewDB didn't exist yet, and I assumed it would be created before the tables/views/procedures for Membership were created. Once I clicked “Register” to register my first username, it took a while, then registered me and logged me in.
    Also, I went to the SQL Azure Management Portal and verified that “NewDB” had indeed just been created. I could also connect to the SQL Azure database “NewDB” from Management Studio and found that the tables no longer carry the aspnet_ prefix: the tables were simply Users, Roles, UsersInRoles, Profiles etc. So, with a few clicks and a configuration change, I could actually set up the user base for my application on SQL Azure, and even have SessionState, Roles and Profiles stored in the SQL Azure database. The new System.Web.Providers also require the MARS setting (MultipleActiveResultSets=True), since they use Entity Framework for the DAL operations.  Also, the “Project – ASP.NET Configuration” screen can still be used to create/manage users/roles etc., even though the data is stored in the remote database. With that, a long-pending request from the community is fulfilled: the ability to configure and use remote databases for application user management without having to run the scripts from SQL Express. Cheers !!!
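
    For reference, a sketch of the web.config fragment involved, with placeholder server name and credentials (the exact attribute set may differ across provider versions):

        <connectionStrings>
          <add name="DefaultConnection"
               providerName="System.Data.SqlClient"
               connectionString="Server=tcp:yourserver.database.windows.net;Database=NewDB;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;MultipleActiveResultSets=True" />
        </connectionStrings>

    For the classic route mentioned above (without System.Web.Providers), the manual step would be along the lines of: aspnet_regsql.exe -S yourserver -d NewDB -U youruser -P yourpassword -A all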

    Read the article

  • Spotlight on Oracle Social Relationship Management. Social Enable Your Enterprise with Oracle SRM.

    - by Pat Ma
    Facebook is now the most popular site on the Internet. People are tweeting more than they send email. Because there are so many people on social media, companies and brands want to be there too. They want to be able to listen to social chatter, engage with customers on social, create great-looking Facebook pages, and roll out social-collaborative work environments within their organization. This is where Oracle Social Relationship Management (SRM) comes in. Oracle SRM is a product that allows companies to manage their presence with prospects and customers on social channels. Let's talk about two popular use cases for Oracle SRM. Easy publishing - Companies now have an average of 178 social media accounts, with every product, geography or employee group creating its own social media channel. For example, if you work at an international hotel chain where every single hotel creates its own Facebook page for its location, that chain can have well over 1,000 social media accounts. Managing these channels is a mess: logging in and out of every account, making sure that all accounts are on brand, and preventing rogue posts from destroying the brand. This is where Oracle SRM comes in. With Oracle Social Relationship Management, you can log into one window and post messages to all 1,000+ social channels at once. You can set up approval flows and have each account generate its own content, but that content must be approved before publishing. The benefits are easy social media publishing, brand consistency across all channels, and protection of your brand from inappropriate posts. Monitoring and listening - People are writing and talking about your company right now on social media. 75% of social media users have written a negative post about a brand after a poor customer service experience. Think about all the negative posts you see in your Facebook news feed about delayed flights or being on hold for 45 minutes. There is so much social chatter going on around your brand that it's almost impossible to keep up or comprehend what's going on. That's where Oracle SRM comes in. With Social Relationship Management, a company can monitor and listen to what people are saying about them on social channels. They can drill down into individual posts or get a high-level view of trends and mentions. The benefits are comprehending what's being said about your brand and its competitors, understanding customers and their intent, and responding to negative posts before they become a PR crisis. Oracle SRM is part of Oracle Cloud. The benefits of cloud deployment for customers are faster deployments, less maintenance, and lower cost of ownership versus on-premise deployments. Oracle SRM also fits into Oracle's vision to social-enable your enterprise. With Oracle SRM, social media is not just a marketing channel; it is also a mechanism for sales, customer support, recruiting, and employee collaboration. For more information about how Oracle SRM can social-enable your enterprise, please visit oracle.com/social. For more information about Oracle Cloud, please visit cloud.oracle.com.

    Read the article

  • Are VMWare ESXi 5 patches cumulative?

    - by ewwhite
    It seems basic, but there's confusion about the patching strategy needed to manually update standalone VMware ESXi hosts. The VMware vSphere blog attempts to explain this, but it's still not clear. From the blog: Say Patch01 includes updates for the following VIBs: "esxi-base", "driver10" and "driver44". And then later Patch02 comes out with updates to "esxi-base", "driver20" and "driver44". Patch02 is cumulative in that the "esxi-base" and "driver44" VIBs will include the updates in Patch01. However, it's important to note that Patch02 does not include the "driver10" VIB, as that module was not updated. Many of my ESXi installations are standalone and do not make use of Update Manager. It is possible to update an individual host using the patches made available through the VMware patch download portal. The process is quite simple, and that part makes sense. The bigger issue is determining what to actually download and install. In my case, I have a good number of HP-specific ESXi builds that incorporate sensors and management for HP ProLiant hardware. Let's say that those servers start at ESXi build #474610 from 9/2011. Looking at the patch portal, there is a patch for ESXi update01, build #623860. There are also patches for builds #653509 and #702118. Coming from the old version of ESXi, what is the proper approach to bring the system fully up to date? Which patches are cumulative and which need to be applied sequentially? Perhaps the download size is the confusing factor, but is installing the newest build the right approach, or do I need to step back and patch incrementally?
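
    For what it's worth, a sketch of the manual workflow on a standalone ESXi 5 host (the datastore path and bundle file name are hypothetical). esxcli only installs VIBs from the bundle that are newer than what is on the host, which matches the per-VIB cumulation the blog describes:

        # from an SSH session on the host, with the patch bundle uploaded to a datastore
        vim-cmd hostsvc/maintenance_mode_enter
        esxcli software vib update -d /vmfs/volumes/datastore1/ESXi500-201209001.zip
        reboot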

    Read the article

  • i5 540M or i7 720QM for laptop running VMs and software development tools?

    - by Donald Hughes
    I'm a software developer, and Windows 7 would be the primary operating system. On a typical day, I might, at any given moment, be running Visual Studio, Expression Web, SQL Server Developer (and Management Console), IIS, Photoshop, a dozen browser tabs in 2-3 different browsers, Skype video chat, streaming music, and a couple of VMs (WinXP and Ubuntu) for testing/experimentation. Obviously, RAM is a concern, which is why I plan to use 8 GB so I can devote enough to the VMs to make them usable. I'm also tempted to use an ExpressCard SSD for storing the VM disks, to ease disk contention. And I know that this is asking a lot from a laptop, and that I should just use a desktop, but I need to be able to take my work with me between several locations. It seems that at a reasonable price point it comes down to the i5 540M versus the i7 720QM. I'm leaning toward the i7, since it would allow me to dedicate a whole hyperthreaded core to each VM and still have two cores left for the primary OS. I've heard that the i5 has better battery life, but I'm curious whether there would be a meaningful difference in my scenario. I don't usually work without a plug, but I do occasionally ride the train or fly, and it would be nice to have at least 3 hours of juice for unusual circumstances. And, finally, for this usage scenario, would a dedicated video option be preferred over the i5's integrated video? It sounds like Visual Studio 2010 (and Windows 7) can take advantage of the video card.

    Read the article

  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/firewalls, using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)? I'm working my way through my first salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master, but I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings:

        /etc/salt/master:
            publish_port: 465
            ret_port: 443

        /etc/salt/minion:
            master_port: 465

    That did not work. Background: I have a custom-developed application presently running on about 40 Kubuntu laptops (with more planned). Every few months I have to update the application (often this just amounts to replacing a .jar file, which requires root permissions). I also have to run Ubuntu updates and a few other minor things. I've been doing it manually, one by one, using TeamViewer to log into each client, and I would like to dramatically improve this process. The two options I'm aware of are: (a) reverse ssh tunnels and bash scripts, which I tested and which work, but which give me none of the reporting I would get with Salt Stack; or (b) Salt Stack (or a similar management tool). But I need a really simple tool; I can't invest any time in a big learning curve. I looked at Puppet and a bunch of related tools, and the only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above. I appreciate suggestions.
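
    If I understand the port pairing correctly (this is my reading of the salt docs, so treat it as an assumption), the minion's master_port has to match the master's ret_port, not its publish_port, and the minion only makes outbound connections, so nothing needs to be opened on the consumer routers at all. A sketch of matched configs for the ports above:

        # /etc/salt/master
        publish_port: 465
        ret_port: 443

        # /etc/salt/minion  (master address hypothetical)
        master: salt.example.com
        master_port: 443    # pairs with the master's ret_port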

    Read the article

  • Self-hosting vs. Budget hosting - What are the economics?

    - by cdonner
    My current hosting provider (shared Linux, unlimited domains, < $10 per month, with about 20 sites) has been giving me a lot of grief lately. I am contemplating just ditching them, repurposing the old Sun V20z that is sitting in my basement rack, and moving the hosting in-house, literally. My math goes as follows:

        - my company pays up to $80 a month for my home Internet service, which would cover the upgrade from my current FiOS service to Comcast business Internet with 5 static IPs, so this comes free
        - running the server will cost me about $180/year at the current rate of approx. $0.20/kWh
        - my time is free

    So it seems that my net cost of doing this would be about $80 annually, plus the work that goes into setup and maintenance. I will have to get email hosting somewhere, which I do not want to do myself. On the other side of the balance sheet, I'd likely get better uptime than my provider based on recent stats, will not get suspended, and won't have to spend hours with customer support. Overall, I am not convinced. Has anybody actually done this? What was your experience, and did it pay off?
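
    As a sanity check on the electricity line (assuming the $0.20/kWh rate, and treating average draw as the unknown):

        # $180/yr at $0.20/kWh implies ~900 kWh/yr, i.e. ~100 W continuous draw
        rate_per_kwh = 0.20
        annual_cost = 180.0
        kwh_per_year = annual_cost / rate_per_kwh      # 900.0 kWh
        avg_watts = kwh_per_year * 1000 / (24 * 365)   # ~102.7 W
        print(round(avg_watts, 1))                     # 102.7

    So the estimate assumes the V20z averages about 100 W; if it draws more under load, the $180/year figure scales up proportionally.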

    Read the article

  • How can the little guys effectively learn and use puppet?

    - by drumfire
    Six months ago, in our not-for-profit project, we decided to start migrating our system management to a Puppet-controlled environment, because we expect our number of servers to grow substantially between now and a year from now. Since the decision was made, our IT guys have become a bit too annoyed a bit too often. Their biggest objections are:

        - "We're not programmers, we're sysadmins."
        - Modules are available online, but many differ from one another; wheels are being reinvented too often. How do you decide which one fits the bill?
        - Code in our repo is not transparent enough; to find out how something works they have to recurse through manifests and modules they might even have written themselves a while ago.
        - One new daemon requires writing a new module, and conventions have to be similar to those of other modules: a difficult process. "Let's just run it and see how it works."
        - Tons of barely known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... how can our sysadmins keep track?

    I can see why a large organisation would dispatch their sysadmins to puppet courses to become puppet masters. But how would smaller players get to learn puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?
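
    For context, the modules at issue are often this small; a minimal sketch in Puppet's DSL (daemon name hypothetical) of the "one new daemon, one new module" case:

        # modules/ntp/manifests/init.pp
        class ntp {
          package { 'ntp':
            ensure => installed,
          }
          service { 'ntp':
            ensure  => running,
            enable  => true,
            require => Package['ntp'],
          }
        }

    The objections are less about writing ten lines like these and more about matching the conventions of the other forty modules in the repo.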

    Read the article

  • Lightweight Linux distro that includes developer tools? (or, the most BSD-like Linux)

    - by RevAaron
    I cut my teeth on Minix and Slackware 1.1, but I've been in the OS X wilderness for the last few years. I'm trying to standardize on a Linux distribution for personal and work-related use on less powerful laptops and under virtualization. So far, NetBSD and OpenBSD are the best fit for my purposes, but after plenty of frustration I've come to the conclusion that I need to stick with Linux to get the hardware and software support that comes with it. What I like about NetBSD/OpenBSD, and would like to keep:

        - X, but no default KDE, GNOME or XFCE!
        - A sensible /etc and dot-file setup: startx calls xinit, xinit looks for ~/.xinitrc; nothing more complicated than that is needed.
        - Command-line tools and file-based configuration: I shouldn't need a GUI to connect to a WAP.
        - A decent selection of binary packages; building from source is OK, but nothing source-only like Gentoo. pkg_add (BSD) and apt-get have both treated me well in the past.
        - Modest RAM and HDD requirements: boot + X + awesome + two xterms takes up 80 MB on OpenBSD, and 240 MB on Debian 5 and Crunchbang.

    In my experience, most "lightweight" distros and live CDs focus on a nice desktop environment crammed into a CD or USB stick; once you add build-essential you end up with something just about as bloated as a full Ubuntu or Debian install. Crunchbang is a great example. Thanks in advance for all suggestions!
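
    To make the dot-file point concrete, the entire X setup I'm describing can be as small as this (window manager as named in the post):

        # ~/.xinitrc -- all that startx/xinit needs to find
        xterm &
        exec awesome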

    Read the article

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers, all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed in them, e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of the Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not or cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

    Read the article

  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: mp3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made, available for consumers to visit the page and listen to. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists. How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with; I am putting together a business plan and seeking investments. I have a game plan to grow fast enough to be successful, and slow enough to manage the financial growth requirements. Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC

    Read the article

  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft-brand USB device that acts as a receiver for a wireless Microsoft keyboard and a wireless mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter two are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit running on an AMD Athlon 64 X2 Dual Core 5000+ processor. When the computer goes to sleep (the power indicator on the main box is flashing), I can wake it by moving the mouse. So far all is good. However, something changed in, I think, the past couple of weeks (I suspect due to a Microsoft driver update problem). Before the change, after waking the computer, everything would operate normally as far as I could tell. Now, after waking the computer, the receiver has no lights on, and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while doing so, but it has no effect in this state. It's as if the receiver doesn't have power (but how would the system know I moved the mouse, unless the power was on until it woke up?). I have checked the BIOS/CMOS settings, or whatever you call them, and did not see anything related to USB in the power management section. I have checked Windows 7 Device Manager and ensured that all the USB Root Hub devices have the setting unchecked for allowing USB power to be turned off. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows updates.

    Read the article

  • Is the sysadmin/netadmin the defacto project planner at your organization?

    - by gft74
    At my company, it has somehow over the past few years slowly become my job to come up with a project plan, milestones and timelines for deployment of developer applications. Typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the reqs for the site (ssl? db? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering:

        - what servers it will live on
        - why those servers
        - what the timeline is for creating the resources
        - a step-by-step SOP for getting the application onto the server, with all related resources created (dns, firewall, load balancer etc.)

    I may just be whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but this still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • Which database to use and system/db administration by layman [closed]

    - by blah
    So my friend and I got a brilliant ;) idea for a business. Since it is not predictable whether it will work out or not, we decided to keep costs as low as possible to start with; in particular, not to hire anyone. If it works out as expected, it will generate enough profit to hire professionals in a few months, but for the first few months we'll be doing everything by ourselves. He's a business/finance major and I'm a software developer, so obviously I have to take care of IT :) It will be a webapp, written in Python/Django. My questions regarding this project:

    1) What database should I choose? I'm experienced with Oracle and have been working with SQL Server for a while, but both of them are too expensive (at least for now). Mine is developer experience; I've never done any DBA stuff. I'm looking for something free (as in beer). It looks like MySQL and PostgreSQL are the most popular in this sector; I would appreciate any comments on which db to choose, and I'm open to any suggestions (it doesn't have to be MySQL or PostgreSQL). Here's what I know about the data: it will be almost all dates and numbers, with a little bit of text, searched mainly by dates. Data will almost never be updated, mostly inserted and browsed. From 30k to 300k new records/month.

    2) Servers. My idea is to rent two dedicated servers. During normal operation one would be a web server (debian/apache), the other would be a db server (debian/?). My recovery plan is to install everything on both, and in case of trouble with one of the machines, just run everything on the other one. Does that even make sense? Any other tips appreciated. Thanks.
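
    Whichever of the two I pick, the data description above maps to something this simple; a sketch in SQL (table and column names are made up), with the date index serving the "searched mainly by dates" pattern:

        -- hypothetical schema: date-keyed numbers plus a little text
        CREATE TABLE entries (
            id        SERIAL PRIMARY KEY,   -- PostgreSQL; MySQL would use AUTO_INCREMENT
            entry_on  DATE NOT NULL,
            amount    NUMERIC(12,2) NOT NULL,
            note      TEXT
        );

        CREATE INDEX entries_entry_on_idx ON entries (entry_on);

    At 30k-300k inserts/month (a handful of rows per minute at peak), either database handles this on very modest hardware.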

    Read the article

  • Using gentoo, how does one stick -9999 ebuild to a specific svn revision?

    - by hurikhan77
    As an example, given the django-9999 ebuild: to match the developers' environment I need to check out r12120 from trunk. Installing Django manually is not an option for package-management reasons, but there is also no ebuild in portage for the 1.2 beta versions. So I did the following:

        ESVN_OPTIONS="-r12120" emerge -1a django

    which installed the required revision from svn. But this is cumbersome in a way. Is there some way to define this statically per ebuild, e.g. something like:

        DJANGO_SVN_REV="12120"

    in make.conf? This would be much cleaner in my eyes, because next time I need to rebuild django for whatever reason, I need to remember "oh, I wanted this to stick to a specific revision", and the next question will be "err, f&!#$?%, what was it again?" What's the best way to go here? Keep in mind:

        - manually installing packages without package-manager knowledge is no option
        - working around with manual emerge variable prefixing is no option
        - setting up a /etc/portage/package.env would be a way to go (as described here), but that seems pretty unsupported and kludgy to me and thus unpreferable
        - modifying make.conf would be a way to go
        - keeping the ebuild in an overlay would be an option
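
    For the record, a sketch of what the package.env route would look like (file name mine; the mechanism is standard portage as far as I can tell):

        # /etc/portage/env/django-r12120.conf
        ESVN_OPTIONS="-r12120"

        # /etc/portage/package.env
        dev-python/django django-r12120.conf

    After that, a plain emerge -1a django should rebuild against the pinned revision without any variable prefixing.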

    Read the article

  • Map FTP folder to folder on different FTP server

    - by jolt
    In my team we work a lot with FTP. We upload and download files from several different servers daily. Currently every member of the team manages access credentials to each FTP server locally on their own machine. I am looking for a way to set up a central FTP server that we can connect to, and from there navigate to folders that each represent one of the other FTP servers that we connect to daily. Something like this:

        In-house central FTP server:
        |- FolderA --> server A root folder
        |- FolderB --> server B root folder
        |- FolderC --> server C root folder

    A setup like this would mean that we could manage access credentials on the central FTP server, and team members would only need the access credentials for the central FTP server; from there they could navigate to the other servers through these "virtual" folders. We could potentially develop our own custom FTP server that just forwards requests to the remote FTP servers, but I feel like something like this (or something similar) must already have been done. So I'm looking for pointers that could help us find software for Windows that could simplify our current setup. Thank you! Similar (unanswered) question here: FTP management server

    Read the article

  • AD User Passwords expiring without any notifications?

    - by scooter133
    We set up password policies in Active Directory to expire people's passwords after so many days. Well, it looks like the time has come for the passwords to expire, and people are getting locked out... There was no warning that user passwords were about to expire. They just come in to work and they cannot log in, the phones no longer connect, nothing. Reset the password and all is good. Some of the users are locked out, though most are not; they just cannot log in. When setting the password expiration, I didn't see anything about warning the users of the impending expiration. It seems like it used to warn you 15 days or so before it would expire. Clients range from WinXP, WinVista and Win7 to Server 2008 R2 Remote Desktop Services. How can I make sure my users are warned of the expiration? Resultant Set of Policy for a user that was not prompted (the winning GPO is Default Domain Policy in every case):

        Account Policies/Password Policy
            Enforce password history: 10 passwords remembered
            Maximum password age: 270 days
            Minimum password age: 0 days
            Minimum password length: 4 characters
            Password must meet complexity requirements: Disabled
            Store passwords using reversible encryption: Disabled

        Account Policies/Account Lockout Policy
            Account lockout duration: 20 minutes
            Account lockout threshold: 5 invalid logon attempts
            Reset account lockout counter after: 15 minutes

        Local Policies/Audit Policy
            Audit account logon events: Failure
            Audit account management: Success, Failure
            Audit directory service access: Success, Failure
            Audit logon events: Failure
            Audit policy change: Success, Failure
            Audit privilege use: Failure

        Local Policies/Security Options – Interactive Logon
            Interactive logon: Prompt user to change password before expiration: 7 days

    Read the article

  • Recommendations for hosting large videos

    - by Clinton Blackmore
    I recently created and put a 45-minute, 300 MB video file on my website and told a mailing list about it. Checking my site stats, I see that I've used 20% of my "unlimited" bandwidth for the month. As I want to be able to host several videos like this, clearly I need to consider other options. The appeal of hosting files on my own site (aside from the supposedly unlimited disk space and bandwidth) is being able to control the format, resolution, and quality of the videos, as well as making it clear that I'm the copyright holder (although the videos will be under a Creative Commons license). I find that for the screencasts I'm making, a high resolution (say 3/4 of 1024 * 768) really makes it easier to see what is going on on the screen. It is also always a plus to not have the experience marred by advertisements. One more wrench to throw in: while the videos are non-commercial, they do promote a club, and it seems that that falls afoul of some terms of service (especially for free services; while free is very nice, I will certainly consider putting up some money). What recommendations do you have for (fairly) long, high-resolution videos? Should I look in depth at sites like YouTube and Vimeo, should I be considering a file-sharing site [I have no qualms with someone downloading the entire video first -- I wouldn't want to watch 45 minutes in my browser!], hosting files with BitTorrent (ugh -- I think that'd reduce my audience), or should I be looking into other web hosts (and if so, who?)

    Read the article

  • Different approaches to share files over local network & playlists "collaboration"

    - by exTyn
    I know that I can use Google to find methods for sharing files over a local network [1]. But I have never shared files over a local network, and I want to do it in a good, professional way. (Also, this could be a good community wiki, I think.) What I am asking for is: what are the pros and cons of the different methods of sharing files over a local network? In my case, I need to share files between Linux & Win 7, and I want it to be secure (= no access for anyone but me & the people in my room). Another question (connected with the above topic) is about playing music over the local network. Let's say I live with 2 other guys in a room, one of us has speakers, and we want to collaborate in creating playlists (e.g. everyone chooses 3 songs to be played). Is this possible? How would we do it? I am asking this question on SuperUser because it is connected with hardware & software (networking, connecting computers, software for managing playlists over the network etc.). I think it is the most appropriate place for such a question (I have considered SO and SF). [1] And I have already done this! But I do not have experience in this field (sharing files over a local network), so I am asking about pros and cons.
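
    For the Linux/Win 7 case specifically, one of the standard options is Samba; a minimal sketch of a share restricted to named accounts (paths and user names are made up):

        # /etc/samba/smb.conf -- share visible only to the listed users
        [shared]
            path = /home/me/shared
            valid users = me roommate1 roommate2
            read only = no

    Each listed user would still need a Samba password set on the Linux box (smbpasswd -a <user>) before Windows 7 can connect.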

    Read the article

  • How to schedule automatic (daily) snapshots of AWS EC2 Windows Instance?

    - by Stanley
    I have some Windows servers hosted on Amazon EC2. Some run Windows Server 2003 and others run Windows Server 2008. These are EBS-backed instances, and most of the instances also have some additional EBS volumes attached. We want to schedule a daily snapshot of the Windows machines (and also the attached EBS volumes) to S3, so that we have daily backups available. One would think that this is a very common requirement and would be made available via the AWS Management Console, but alas, it is not. What approaches are available? How do I schedule daily snapshots on our Windows servers? There are several scripting examples available online for Linux, but not so much for Windows. I have had a look at http://sehmer.blogspot.com/2011/04/amazon-ec2-daily-snapshot-script-for.html as well as https://github.com/ronmichael/aws-snapshot-scheduler. Has anyone used one of these approaches, and does it work? I have also considered a service like Skeddly, which seems inexpensive at first glance, but when you look at using it for several servers the price soon escalates to the point where it seems a better option to create your own solution, as you can then apply it to new servers in the future. With Skeddly we'd pay for each server. How do we schedule daily snapshots of our Windows instances?
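
    For reference, a sketch of the roll-your-own route with the boto library (instance id is a placeholder); scheduled daily via Windows Task Scheduler or cron, it snapshots every EBS volume attached to one instance:

        # snapshot_volumes.py -- daily EBS snapshots for one instance
        import datetime
        import boto

        INSTANCE_ID = "i-12345678"   # hypothetical

        conn = boto.connect_ec2()    # credentials from env/boto config
        volumes = conn.get_all_volumes(
            filters={"attachment.instance-id": INSTANCE_ID})
        for volume in volumes:
            label = "daily %s %s" % (volume.id, datetime.date.today())
            conn.create_snapshot(volume.id, label)

    Note that such snapshots are crash-consistent only; for transactional workloads it's common to quiesce or flush on the instance before snapshotting.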

    Read the article
