Search Results

Search found 9250 results on 370 pages for 'weekend projects'.

Page 88/370 | < Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >

  • Are there any decent open-source search engine solutions?

    - by Nazariy
    A few weeks ago my friend asked me how hard it is to launch your own search engine service with a list of websites that are supposed to be crawled from time to time. The first thing that came to mind was Google Custom Search; however, its pricing policy is quite tricky and would drain your budget if you reach 500K queries per year. Another solution I found here was SearchBlox, which can be compared to the Google Mini service. It's quite a good solution if you plan to cover search over a small number of websites, but for larger projects it is not very handy. I also found a few other search platforms like Lucene, Hadoop and Xapian, which seem to be quite powerful solutions for reaching Google-like search quality, and Nutch as a web crawler. Like most open-source projects they share the same problem: a lack of comprehensive usage guidance and examples, and the expectation that you are already an expert in the subject. I'm wondering if any of you are using these solutions, which of them would you recommend, and what should I be aware of?

    Read the article

  • Where to set Visual Studio 2013 property macros

    - by marcp
    I'm a new VS user. I've received some sample C++ projects that work with a 3rd-party API. They were saved in VS2012 format, but I have VS2013. After conversion I find that there is an API-specific macro defined in the project properties under "Linker | General | Additional Library Directories". If I click on 'edit' I can replace the macro with an actual path, but how do I establish what the macro points to? In other words, how does one create a macro usable in multiple projects?
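
    A hedged sketch of the usual mechanism: user-defined macros live in a property sheet (.props file) created in the Property Manager window (Add New Project Property Sheet) and imported into every project that needs it; the macro name and path below are made up for illustration. Once the sheet is attached to a project, $(ApiRoot) can be used in Additional Library Directories just like the macro in the sample projects.

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <!-- User macro definition: ApiRoot is an illustrative name and path -->
          <PropertyGroup Label="UserMacros">
            <ApiRoot>C:\ThirdParty\SomeApi</ApiRoot>
          </PropertyGroup>
          <!-- Makes the macro visible in the Visual Studio property pages UI -->
          <ItemGroup>
            <BuildMacro Include="ApiRoot">
              <Value>$(ApiRoot)</Value>
            </BuildMacro>
          </ItemGroup>
        </Project>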

    Read the article

  • Getting out of the helpdesk. Getting into system administration.

    - by eric.s
    Today I found out that the job which would have been a promotion for me went to an outside candidate. The position is for a systems admin on our LAN team. I have been working on the helpdesk here for almost two years, with another two years of experience working one prior. While we are first-tier support, we also work in AD, monitor and update certain servers, and set up and deploy Windows/Mac images for those we support. What I am looking for here is: what can I do, inside or outside the company, to better myself and move up this ladder? I take what I can from my job - working on projects that I can learn from - but what projects should I be volunteering for? What can I do for myself outside of work to make me a more viable candidate for a systems admin position?

    Read the article

  • Vagrant-aws not provisioning

    - by SuperCabbage
    I'm trying to spin up and provision an EC2 instance with Vagrant. It successfully creates the instance and I can then use vagrant ssh to SSH into it, but Puppet doesn't seem to carry out any provisioning. Upon running vagrant up --provider=aws --provision I get the following output:

        Bringing machine 'default' up with 'aws' provider...
        WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
        [default] Warning! The AWS provider doesn't support any of the Vagrant
        high-level network configurations (`config.vm.network`). They will be
        silently ignored.
        [default] Launching an instance with the following settings...
        [default]  -- Type: m1.small
        [default]  -- AMI: ami-a73264ce
        [default]  -- Region: us-east-1
        [default]  -- Keypair: banderton
        [default]  -- Block Device Mapping: []
        [default]  -- Terminate On Shutdown: false
        [default] Waiting for SSH to become available...
        [default] Machine is booted and ready for use!
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/ => /vagrant
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/manifests/ => /tmp/vagrant-puppet/manifests
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/modules/ => /tmp/vagrant-puppet/modules-0
        [default] Running provisioner: puppet...
        An error occurred while executing multiple actions in parallel.
        Any errors that occurred are shown below.

        An error occurred while executing the action on the 'default' machine.
        Please handle this error then try again: No error message

    I can then SSH into the instance using vagrant ssh, but none of my provisioning has taken place, so I'm assuming that errors have occurred but I'm not being given any useful information about them. My Vagrantfile is as follows:

        Vagrant.configure("2") do |config|
          config.vm.box = "ubuntu_aws"
          config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

          config.vm.provider :aws do |aws, override|
            aws.access_key_id = "REDACTED"
            aws.secret_access_key = "REDACTED"
            aws.keypair_name = "banderton"
            override.ssh.private_key_path = "~/.ssh/banderton.pem"
            override.ssh.username = "ubuntu"
            aws.ami = "ami-a73264ce"
          end

          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "manifests"
            puppet.module_path = "modules"
            puppet.options = ['--verbose']
          end
        end

    My Puppet manifest is as follows:

        package { [
            'build-essential',
            'vim',
            'curl',
            'git-core',
            'nano',
            'freetds-bin'
          ]:
          ensure => 'installed',
        }

    None of the packages are installed.

    Read the article

  • Setting up cmd with a default command and a user-defined message

    - by gpuguy
    On a button click in a WinForms application I am executing cmd.exe (using system("cmd.exe");), which opens perfectly fine and displays the following:

        Microsoft Windows XP [Version 6.1.7601]
        (C) Copyright 1985-2001 Microsoft Corp.

        C: A\Documents\Visual Studio 2010\Projects\WinformTest\WinformTest>

    What I want is that when a user clicks the button, cmd.exe opens with a default command and a message like this:

        Please change command options and press enter to get started experimenting
        C: A\Documents\Visual Studio 2010\Projects\WinformTest\WinformTest> reduction -x 33554432 -i

    Notice that a new command and a message are already there. Can anybody tell me how to go about this?

    Read the article

  • After Redmine install I see only the filesystem

    - by derty
    After installing Redmine, I can only see the filesystem! I reinstalled Redmine 2-3 times in different ways, using these how-tos:

        http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_using_Debian_package
        http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_210_on_Debian_Squeeze_with_Apache_Passenger
        http://beeznest.wordpress.com/2012/09/20/installing-redmine-2-1-on-debian-squeeze-with-apache-modpassenger/

    The web server at 10.0.0.14 is going to sit behind a reverse Apache proxy, but for now I'm working directly on the system; that change wouldn't be a problem, as I use the same setup for a bunch of other services. The database does exist and I can connect to it, and the configuration file config/database.yml is set up correctly with the credentials I use to connect as the redmineuser. So, does anyone have an idea why it is not working as I expect?
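
    For what it's worth, seeing only a bare directory listing usually means Apache is serving the Redmine directory as static files instead of handing requests to Passenger. A hedged sketch of the vhost those Debian how-tos aim at, assuming the Debian package layout under /usr/share/redmine, an illustrative server name, and libapache2-mod-passenger installed and enabled:

        <VirtualHost *:80>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            <Directory /usr/share/redmine/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>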

    Read the article

  • In Django, why do I get a 500 server error when browsing, but "python mysite.fcgi" from SSH works fine?

    - by Jim
    If I browse to my site, I get a 500 "internal server error." However, if I SSH into my server, go to my site's folder and run "python mysite.fcgi", I see the HTML rendered fine. Obviously something is wrong, but I'm not sure what. Here is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteRule ^(media/.*)$ - [L]
        RewriteRule ^(static/.*)$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

    Here is my mysite.fcgi file:

        #!/usr/bin/python2.5
        import sys, os
        sys.path.insert(0, "/kunden/homepages/34/[mydir]/htdocs/projects/django")
        sys.path.insert(1, "/kunden/homepages/34/[mydir]/lib/python/site-packages")
        os.chdir("/kunden/homepages/34/[mydir]/htdocs/projects/django/mysite")
        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(["method=threaded", "daemonize=false"])

    I'm setting this up on 1and1. It has been a pain, but I think I'm close.

    Read the article

  • Which AMI should I use as a base for a Django application?

    - by Edan Maor
    I'm starting development of a Django application on Amazon Web Services, and I'm looking to build an instance that will serve the Django app. I don't have much experience with such things, having only used a shared host before (WebFaction). So I'm wondering, which AMI should I use as a base? I'm assuming I want an Ubuntu AMI, possibly with certain things like Apache pre-installed? One minor point: I'm planning to serve several different Django projects from the same instance. I use virtualenv on my dev machine right now to separate the different projects, and I'm assuming I'll do the same on EC2. Thanks!

    Read the article

  • Use network drives as mount points during installation?

    - by ajsie
    Is it possible to use network storage locations as mount points during installation? I want to separate the system (Ubuntu) from the data (personal files). E.g. if I have 5 computers I don't want to recreate /home/david 5 times, so I want to mount networkdrive/home to /home on the local Ubuntu server, so ALL users' home folders could be used, and maybe also networkdrive/projects to /projects. That way it's OK if I accidentally repartition the local Ubuntu server, because the data is not on that server but on the data server. Is separating "data" from "logic" good in this case? And is it possible? What protocol should I use for the mapping over the internet (maybe the server is in Sweden, and the data is in Norway)? Thanks.
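
    A hedged sketch of the usual post-install approach: export the home directories from the data server over NFS and mount them on each Ubuntu machine via /etc/fstab (the host name and export paths below are made up). For mounts across the public internet, plain NFS is normally avoided in favour of something encrypted, such as SSHFS or NFS over a VPN.

        # /etc/fstab on the local Ubuntu server - illustrative server and exports
        dataserver.example.com:/export/home      /home      nfs  defaults,_netdev  0  0
        dataserver.example.com:/export/projects  /projects  nfs  defaults,_netdev  0  0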

    Read the article

  • How do you set up redundant servers?

    - by user59240
    To the sysadmins out there: I'm trying to get an idea of how you go about maintaining redundant servers for small projects. The modest number of servers in my mind is two, and three main essential services come to mind: HTTP, mail and DNS. How do you automate this duplication? Is rsync the tool of choice (again, for small projects)? In addition to common tools for these tasks, references to books and articles would be greatly appreciated. The more hands-on the approach, the better. Thanks!
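
    For the HTTP part, one minimal, hedged sketch for small setups is a periodic one-way mirror of the web content to the standby machine with rsync over SSH (host and paths below are placeholders); mail and DNS redundancy are usually handled by their own mechanisms (a backup MX record, a secondary DNS server slaving the zone) rather than by file synchronisation.

        # Mirror the web root to a standby server (illustrative host and paths)
        rsync -az --delete /var/www/ standby.example.com:/var/www/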

    Read the article

  • Copy a file from source directory to target base directory and maintain source path

    - by Citizen Dos
    Forgive me, I am probably not using the right terms to describe the problem and misunderstanding the most basic usage of a couple of common commands. I have a simple find statement that is locating files that I want to copy. I want to tack on -exec cp {} and have cp copy each file from the source directory to a new base directory, but include the full path. For example, "find . -name *.txt" locates /user/username/projects/source.txt, and "cp {} [now what?]" should copy the file to /user/newuser/projects/source.txt.
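
    A hedged sketch of one common answer, assuming GNU coreutils: cp's --parents option recreates the source path underneath the target directory, so running find from /user/username reproduces the projects/... hierarchy under the new base (paths mirror the question's example):

        cd /user/username
        find . -name '*.txt' -exec cp --parents {} /user/newuser/ \;
        # ./projects/source.txt  ->  /user/newuser/projects/source.txt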

    Read the article

  • Multiple SSH private keys for the same host

    - by Sencha
    How can I store 2 different private SSH keys for the same host? I have tried 2 entries in /etc/ssh/ssh_config for the same host with different keys, and I've also tried putting both keys in the same file and referencing it from one Host setting, but neither works. More detail: I'm running Ubuntu Server (12.04) and I want to connect to GitHub via SSH to download the latest source for my projects. There are multiple projects running on the same server and each project has a GitHub repo with its own unique deployment key-pair. So the host is always the same (github.com) but the keys need to be different depending on which repo I'm using. Different /etc/ssh/ssh_config versions I have tried:

        Host github.com
            IdentityFile /etc/ssh/my_project_1_github_deploy_key
            StrictHostKeyChecking no

        Host github.com
            IdentityFile /etc/ssh/my_project_2_github_deploy_key
            StrictHostKeyChecking no

    and this, with both keys in the same file:

        Host github.com
            IdentityFile /etc/ssh/my_project_github_deploy_keys
            StrictHostKeyChecking no

    I've had no luck with either. Any help would be greatly appreciated!
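
    A hedged sketch of the usual workaround: give each deploy key its own Host alias in the SSH config (the alias names below are made up) and point each repository's remote at the matching alias instead of github.com, e.g. git remote set-url origin git@github-project1:youraccount/project1.git.

        Host github-project1
            HostName github.com
            User git
            IdentityFile /etc/ssh/my_project_1_github_deploy_key
            StrictHostKeyChecking no

        Host github-project2
            HostName github.com
            User git
            IdentityFile /etc/ssh/my_project_2_github_deploy_key
            StrictHostKeyChecking no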

    Read the article

  • I can't browse PHP pages on my local server

    - by tibin mathew
    Hi, I can't browse PHP pages on my local server. Before, it was working fine, but now I can't browse PHP pages; I can browse HTML pages and ASP pages with no problems. When I try to browse a PHP page it just keeps loading. What could the problem be? I am using Windows 2000 Advanced Server and my web server is Tomcat. Please, someone help me. Guys, I'm not getting anything in my browser, it just continues loading; nothing shows on the page, and I'm not getting a 404 error or anything like that. For example, consider a file located inside a folder named myproject: I can reach up to http://localhost/projects/myproject, but after that I can't browse PHP pages inside it... http://localhost/projects/myproject/index.php just continues loading, and nothing shows on the page.

    Read the article

  • Tracking costs within one AWS account

    - by caius howcroft
    I have what I'm sure is a very common problem. Our company has many projects and groups working for different clients. We do a lot of our development work in the cloud and deploy our solutions there. We have a VPC set up that isolates projects from each other in their own subnets, and that VPC has a hardware VPN connection back to HQ. We need to keep track of the cost run up by every project. The way I currently implement this is by providing my own tools for starting and stopping instances, which log which user (and thus which project) to bill each instance to. This works okay for BoxUsage costs but not for other costs. I could create a separate account for each project and use consolidated billing; this, I think, would allow me to pay once but track costs per "project", but I would then not be able to share common resources (like bringing account B's running instances inside the same VPC). Does anyone have any suggestions? Cheers C

    Read the article

  • "ant" is not recognized as command in Windows

    - by user1294663
    This is my first time developing Android applications. I'm developing an Android app in Eclipse on Windows 7, and I would like to build and run the app from the Windows 7 command line. I have my Android device connected to the PC. The workspace directory that I use to store the Android project is:

        C:\Users\Guest\Desktop\Software Applications Development\Java\Android Moblie Applications Projects\Eclipse Indigo for Java EE x64-bit\project workspace

    I opened the command line interface and changed the working directory to the Android workspace directory:

        cd C:\Users\Guest\Desktop\Software Applications Development\Java\Android Moblie Applications Projects\Eclipse Indigo for Java EE x64-bit\project workspace

    I included the Android SDK platform-tools directory in the PATH environment variable:

        c:\Users\admin\Android-sdks\platform-tools

    Then I entered this into the Windows 7 command line interface:

        ant debug

    I get this error message from cmd:

        'ant' is not recognized as an internal or external command, operable program or batch file.

    What is the solution to this problem?
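
    For what it's worth, that message means Windows cannot find the ant executable itself: Apache Ant is a separate download from the Android SDK, and its bin directory also has to be on PATH. A hedged sketch, assuming Ant was unpacked to C:\apache-ant (an illustrative location):

        rem Illustrative paths - adjust to wherever Ant is actually installed
        set ANT_HOME=C:\apache-ant
        set PATH=%PATH%;%ANT_HOME%\bin
        ant debug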

    Read the article

  • TFS 2010 migration from one server to another

    - by Kabir Rao
    We have followed every step of http://msdn.microsoft.com/en-us/library/ms404869(v=vs.100).aspx, an extremely poorly worded article. We are not able to see the dashboards of SharePoint projects. In some cases (mostly Scrum projects, I guess) I get "The webpage cannot be found". In other cases:

        Unable to refresh data for a data connection in the workbook. Try again or contact your system administrator. The following connections failed to refresh: TfsOlapReport

    Any help would be very much appreciated.

    Read the article

  • Simple Windows+Linux server provisioning? Chef/Puppet/Ansible etc

    - by Andrew
    I'm primarily a developer and part-time devops, and I manage servers here and there for my projects. I want to automate provisioning of web/app/database servers going forward for the projects I manage, across a mixture of both Windows and Linux servers (VPS, cloud and dedicated). I've briefly investigated Chef/Puppet/Ansible, and I am looking for something that:

        - Is easy to learn and understand. I don't want to invest weeks into understanding a complicated piece of tech.
        - Ideally does not require a server ("master server") to hold the configurations.
        - Supports provisioning of Windows and Linux servers.
        - Comes with suitable documentation to get started.

    Does anyone have any advice on what tool is best suited? Thanks

    Read the article

  • What to do with old hard drives?

    - by caliban
    I have over 100 old hard drives, ranging from 100MB Quantums to 200GB WDs, most of them PATA, some SATA, and most still working. The squirrel mentality runs in my family - hoard everything, discard nothing. Thus - and this is the relevant question - any suggestions on how to put these drives to use (anything) instead of them just being deadweights and space takers around the office? Hopeful objectives and suggestions to keep in mind when you post an answer:

        - It should showcase your geekiness, or be plain fun, or serve a social purpose, or benefit the community.
        - You do not need to limit your answer to only one hard drive - if your project needs all 100++, bring it on!
        - Your answer need not be limited to one project per hard drive - if one hard drive can be used for multiple projects, bring it on!
        - If additional accessories need to be purchased, make sure they are common. Don't tell me to get a moon rock or something.
        - The projects you suggest should serve a utility, and not be just for decoration purposes.

    Read the article

  • Per-user vhost logging

    - by kojiro
    I have a working per-user virtual host configuration with Apache, but I would like each user to have access to the logs for his virtual hosts. Obviously the ErrorLog and CustomLog directives don't accept the wildcard syntax that VirtualDocumentRoot does, but is there a way to achieve logs in each user's directory?

        <VirtualHost *:80>
            ServerName *.example.com
            ServerAdmin [email protected]
            VirtualDocumentRoot /home/%2/projects/%1
            <Directory /home/*/projects/>
                Options FollowSymlinks Indexes
                IndexOptions FancyIndexing FoldersFirst
                AllowOverride All
                Order Allow,Deny
                Allow From All
                Satisfy Any
            </Directory>
            Alias /favicon.ico /var/www/default/favicon.ico
            Alias /robots.txt /var/www/default/robots.txt
            LogLevel warn
            # ErrorLog /home/%2/logs/%1.error.log
            # CustomLog /home/%2/logs/%1.access.log combined
        </VirtualHost>
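
    A hedged sketch of the workaround most often suggested: send every request to one piped log whose first field is the canonical virtual host name (%V), then split it into per-host files afterwards with a helper such as Apache's split-logfile (shipped in the httpd support directory); getting the resulting files into each user's home directory would still need a small cron job or similar. The program path below is illustrative.

        LogFormat "%V %h %l %u %t \"%r\" %>s %b" vcommon
        CustomLog "|/usr/local/apache2/bin/split-logfile" vcommon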

    Read the article

  • How should a small team using multiple OS's deploy over github?

    - by Toby
    We have a small development team that has recently moved to GitHub to host our projects. The team consists of three developers, two on Windows and one on a Mac. I am currently researching the best way to deploy applications to our Linux servers (dev and production). Capistrano running locally would be ideal, but from what I read this won't work for Windows machines. It looks like the best way is to use a post-receive hook on GitHub; I can see how this would work for auto-deploying to dev, but I don't see how we could then deploy to live. I have found paid services like http://www.deployhq.com/ but it feels like something that a quick bit of code should be able to do for free, I just can't seem to get myself pointed in the right direction! I was wondering what would be considered best practice for small-team deployment involving multiple local OS's and GitHub.
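
    A hedged sketch of one common pattern that sidesteps GitHub-side hooks entirely: keep a bare repository on each target server with its own post-receive hook, so a deploy is just a git push to the matching remote from any OS; the branch-to-directory mapping and paths below are illustrative.

        #!/bin/sh
        # post-receive hook in a bare repo on the server:
        # pushes to 'master' deploy to dev, pushes to 'production' deploy to live.
        while read oldrev newrev ref; do
            case "$ref" in
                refs/heads/master)
                    GIT_WORK_TREE=/var/www/dev git checkout -f master ;;
                refs/heads/production)
                    GIT_WORK_TREE=/var/www/live git checkout -f production ;;
            esac
        done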

    Read the article

  • How to tell Mercurial to never create hard links

    - by scrapdog
    I am planning to use Mercurial in the near future on some projects. These projects will normally reside in a directory on my Windows machine, but I will be sharing these directories using VirtualBox so I can work on them directly from within Linux. I understand that Mercurial will sometimes create hard links when cloning repositories. I'm not sure how a VirtualBox shared directory handles these hard links (or if it even can), so I'd rather just tell Mercurial to never attempt to make hard links and always make a copy. My question: how do I globally disable Mercurial from hard linking? (Although if someone has gotten Mercurial and VirtualBox shared folders to work nicely with hard linking, I'd like to hear about it!)
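
    One hedged partial workaround (a per-command option rather than a global setting): hg clone only hardlinks when source and destination are on the same filesystem, and its --pull flag forces a regular pull-based copy instead, so clones inside the shared folder can be made link-free per invocation:

        # Force a pull-based copy instead of hardlinks for this clone
        hg clone --pull /path/to/existing-repo /path/to/new-clone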

    Read the article

  • C# development with Mono and MonoDevelop

    - by developerit
    In the past two years, I have been developing .NET from my MacBook by running Windows XP in VMware, and more recently in VirtualBox, from OS X. This way I could install Visual Studio and work seamlessly. But this way of working has a major downside: it kills the battery of my laptop. I can easily last for 3 hours if I stay in OS X, but only 45 minutes when XP is running. Recently, I gave MonoDevelop a try for developing Developer IT's tools and web site. While far less complete than Visual Studio, it provides the essential tools when it comes to developing software. It works well with solution and project files created in Visual Studio, it has IntelliSense (word completion), it can compile your code, and it can even target your .NET app to Linux or Unix. This tool can save me a lot of time and battery! Although I couldn't work exclusively in MonoDevelop, I find it far better than a simple text editor like Smultron. Thanks to Novell, we can now bring Microsoft technology to OS X.

    Read the article

  • Tracking non-HTML (like PDF) downloads with jQuery and Google Analytics

    - by developerit
    Hi folks, it's been quite calm at Developer IT this summer since we were all involved in other projects, but we are slowly coming back. In this post, we will present a simple way of tracking file downloads in Google Analytics with the help of jQuery. We work for a client that offers a lot of PDF files for download on their web site and wanted to know which ones are the most popular. They have used Google Analytics for a long time now and we did not want to have a second interface in order to present those stats to our client, so using IIS logs was not an idea to consider. Since Google already offers us a splendid web interface and a powerful API, we decided to hook simple JavaScript code into the jQuery click event to notify Analytics that a PDF has been requested.

        (function ($) {
            function trackLink(e) {
                var url = $(this).attr('href');
                //alert(url); // for debug purposes

                // old page tracker code
                pageTracker._trackPageview(url);

                // you can use the new one too
                _gaq.push(["_trackPageview", url]);

                // always return true, in order for the browser to continue its job
                return true;
            }

            // When DOM ready
            $(function () {
                // hook up the click event
                $('.pdf-links a').click(trackLink);
            });
        })(jQuery);

    You can be more precise, or make sure not to miss a single click, by changing the selector which hooks up the click event. I have been using this code to track AJAX requests and it works flawlessly.

    Read the article

  • [ASP.NET 4.0] Persisting Row Selection in Data Controls

    - by HosamKamel
    Data Control Selection Feature in ASP.NET 2.0: ASP.NET data controls' row selection feature was based on the row index (within the current page). This of course produces an issue: if you select an item on the first page and then navigate to the second page without selecting any record, you will find the row with the same index selected on the second page! In the sample application attached:

        1. Select the second row in the books GridView.
        2. Navigate to the second page without making any selection.
        3. You will find the second row on the second page selected.

    Persisting Row Selection: This is a new feature which replaces the old index-based selection mechanism with one based on the row data key instead. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected.

    Data Control Selection Feature in ASP.NET 3.5 SP1: Persisting row selection was initially supported only in Dynamic Data projects.

    Data Control Selection Feature in ASP.NET 4.0: Persisted selection is now supported for the GridView and ListView controls in all projects. You can enable this feature by setting the EnablePersistedSelection property, as shown in the markup sketch at the end of this excerpt. One important thing to note: once you enable this feature you also have to set the DataKeyNames property, because, as discussed, the whole approach is based on the row data key. This is a simple feature, but a much more natural behavior than the behavior in earlier versions of ASP.NET.

    Download Demo Project
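
    The markup example the excerpt refers to did not survive the copy; a hedged reconstruction (the control ID, data source ID and key field name are assumptions):

        <asp:GridView ID="BooksGridView" runat="server"
                      AllowPaging="True"
                      DataKeyNames="BookId"
                      EnablePersistedSelection="True"
                      DataSourceID="BooksDataSource">
        </asp:GridView>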

    Read the article

  • Releasing Shrinkr – An ASP.NET MVC Url Shrinking Service

    - by kazimanzurrashid
    A few months back, I started blogging about developing a URL shrinking service in ASP.NET MVC, but could not complete it due to my engagement with my professional projects. Recently, I was able to find some time for this project and complete the remaining features that we planned for the initial release. So I am announcing the official release; the source code is hosted on CodePlex, and you can also see it live in action over here. The features that we have implemented so far:

    Public:
        - OpenID login.
        - Base 36 and 62 based URL generation.
        - 301 and 302 redirect.
        - Custom alias.
        - Maintaining a user's generated URLs.
        - URL thumbnail.
        - Spam detection through Google Safe Browsing.
        - Preview page (with Google warning).
        - REST-based API for URL shrinking (json/xml/text).

    Control Panel:
        - Application health monitoring.
        - Marking URLs as spam/safe.
        - Block/unblock user.
        - Allow/disallow user API access.
        - Manage banned domains.
        - Manage banned IP addresses.
        - Manage reserved aliases.
        - Manage bad words.
        - Twitter notification when spam is submitted.

    Behind the scenes it is developed with:
        - Entity Framework 4 (Code Only)
        - ASP.NET MVC 2
        - AspNetMvcExtensibility
        - Telerik Extensions for ASP.NET MVC (yes, you can use it freely in your open source projects)
        - DotNetOpenAuth
        - Elmah
        - Moq
        - xUnit.net
        - jQuery

    We will also be releasing a minor update in a few weeks which will contain some of the popular Twitter client plug-ins and samples of how to use the REST API; we will also try to include the nHibernate + Spark version in that release. In the next release (not sure about the timeline yet) we will include geo-coding and some rich reporting for both users and administrators. Enjoy!!!

    Read the article
