Search Results

Search found 4851 results on 195 pages for 'hosting'.

Page 178/195

  • Replacement for Hamachi for SVN access

    - by Piers
    My company has been using Hamachi to access our SVN repository for a number of years. We are a small yet widely distributed development team, with each programmer in a different country working from home. The server is hosted by a non-techie in our central office. Hamachi is useful here since it has a GUI and supports remote management. This system worked well for a while, but recently I have moved to a country with poor internet speeds. Hamachi will no longer connect 99% of the time - instead I get a "Probing..." message that doesn't resolve. I'm certain it's a latency issue, as the same laptop connects without problems when I cross the border and use a different ISP with better speeds. So I really need to replace Hamachi with some other VPN/protocol that handles latency better. The techie managing the repository is not comfortable installing and configuring Apache or IIS, so it looks like HTTP is out. I tried to convince my boss to go for a web hosting company, but he doesn't trust a 3rd party with our source. Are there any other recommended options or experiences out there for accessing our SVN repos that would be as simple as Hamachi to set up, but more tolerant of network latency issues?

    Read the article

  • Configuring Joomla's e-mail notification for new accounts

    - by Dion
    I'm using Joomla 1.5 to create a local site for my office. The site will be accessed locally via the intranet, and my PC will be the localhost for the site. I'm using a Login plugin, so that anyone who wants to enter the site has to create an account. In Joomla, every user who creates an account for the first time receives a notification e-mail like: "Hello pras, You have been added as a User to Information Center by an Administrator. This e-mail contains your username and password to log in to http://localhost/yaddayadda/ Username: hadisuryo.prasetio Password: xxxx Please do not respond to this message as it is automatically generated and is for information purposes only." But if the user clicks the URL in the mail, which is "localhost/yaddayadda/", they will not be directed to my site, but to their own PC's localhost. My question is: how can I modify the e-mail or the site configuration so that the URL will no longer be "localhost/yaddayadda/", but "(my-IP-address)/yaddayadda"? I'm not going to host my site on a web hosting service, just use my PC as the host. I've been trying to trace through the config and .ini files, and it seems I have to do something with the "JURI" function or the "$mosConfig_live_site" variable in the backlink.php file:
        $mosConfig_absolute_path = JPATH_SITE;
        $mosConfig_live_site = JURI :: base();
        $url_array = explode('/', $_SERVER['REQUEST_URI']);
    Can anyone give me assistance? Thank you.

    Read the article

  • How much memory would I need for something like this?

    - by Vdas Dorls
    If I create a social portal similar to Twitter (not exactly Twitter, but similar, in my country), how much space would I need for images? I guess 100 GB of disk space won't be enough, so could you please give me some information on how much I would actually need? Are there any suggestions on how I should handle the profile images? Are there any tactics on the programming side that would let me save some space when uploading and hosting users' profile images? I guess each time a user changes their image, it would be good to delete the previous image, correct? Additionally, how much disk space would be needed for 1,000,000 user profiles, if we have about 15 default images and part of the users won't upload their own images, but use one of the defaults? So, 3 questions:
    - How much disk space would I need to hold a good social portal?
    - Is there any suggested way to deal with pictures in PHP to save disk space?
    - How much disk space would be needed for 1,000,000 user profiles, if we have about 15 default images and part of the users won't upload their own, but instead use one of the 15 default images?
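    As a rough back-of-envelope sketch of the storage arithmetic (the average image size, the share of users who upload their own picture, and the number of resized copies below are assumed figures for illustration, not measurements):

        # Rough storage estimate; every figure here is an assumption to adjust.
        users = 1_000_000
        upload_share = 0.4         # assume 40% of users upload their own image
        avg_image_kb = 150         # assume ~150 KB per stored JPEG
        copies_per_image = 3       # assume original plus two resized versions

        uploaded_images = users * upload_share
        total_gb = uploaded_images * avg_image_kb * copies_per_image / 1024 / 1024
        print(round(total_gb, 1))  # about 171.7 GB with these assumptions

    The 15 shared default images add essentially nothing, and deleting a user's previous image when a new one is uploaded, as suggested above, keeps the total from growing over time.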

    Read the article

  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application which has been performing the imports, due to the missing ManagedDTS DLL in SQL Server 2008. It's several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (a 5 minute import time compared to 30+ for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, preferably with minimal SQL features installed. Options I've considered:
    - Creating an Access application to connect to both databases (SQL Server and Access) and perform the import (ugh!)
    - Revisiting ADO.NET to see if the original implementation was poorly written.
    - Updated SSIS packages.
    What other technologies should I be considering for this job?

    Read the article

  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamhost

    - by tmslnz
    The story: after cleaning up my Dreamhost shared server's home folder of all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.
    The script: I am hosting the script at http://bitbucket.org/tmslnz/python-dreamhost-batch/src/
    The TODOs: so far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc. setup without even needing to log out and back in. I thought this might be of use for others too, but there are a few things that I think it's missing, and I am not quite sure how to go about them, what's the best way to do it, or if this just doesn't make any sense at all:
    - Check for errors and break
    - Check for minor version bumps of the packages and give warnings
    - Check for known dependencies
    - Use arguments to install only some of the packages instead of commenting out lines
    - Organise the code in a manner that's easy to update
    - Optionally make the installers and compiling silent, with error logging to file
    - Failproof .bashrc modification to prevent breaking ssh logins and having to log back in via FTP to fix it
    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (see my answer to Ry4an's comment below)
    The gist: I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somewhat going in the direction of apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround; particularly so with some community work involved.

    Read the article

  • What makes Tomcat 5.5 unable to be "aware" of new Java web applications?

    - by Michael Mao
    This is for uni homework, but I reckon it is more of a generic problem with the Tomcat server (version 5.5.27) at my uni. The problem is, I first did a skeleton Java web application (just a simple Servlet and a welcome-file, nothing complicated, no libs included) using NetBeans 6.8 with the bundled Tomcat 6.0.20 (localhost:8084/WSD). Then, to test and prove it is "portable" and "auto-deployable", I cleaned and built a WSD.war file and dropped it onto my XAMPP Tomcat (localhost:8080/WSD). The war extracted everything accordingly and I can see identical output from this Tomcat. So far, so good. However, after I tried to drop the war onto the uni server, a funny thing happens: even though I've changed the war's permissions to 755, it is simply not "responding". I then copied the extracted files to the uni server, but the MainServlet cannot be recognized from within its context path "/WSD"; basically nothing works except the static index.jsp. I tried several times to stop and restart the uni Tomcat, but it doesn't help. I wonder what makes this happen? Is there anything I did wrong with my approach? To be frank, I've paid no attention to a server not under my control, and I am unfortunately not a real active day-to-day Java programmer now. I understand the fundamentals of MVC, Servlets, JSPs and JavaBeans, but I really feel frustrated by this, as I cannot see why... Or, should I ask: is a Java web application, after being cleaned and built by NetBeans 6.8, self-contained and self-configured, ready to be deployed to any Java web container? I know I could certainly program everything in plain old JSP, but that is soooo... unacceptable to myself... Update: I am now wondering if there is any free Tomcat hosting, so that I could see whether my war file and/or my web app will run there without any configuration at all.

    Read the article

  • Get subdomain of XML page from XSL

    - by fudgey
    I am working with a guild hosting site that provides an XML/XSL transformation widget where all I need to do is enter the URL for the XML and the XSL and it does the rest. I have written an XSL transform to display a World of Warcraft armory page. Here is an example XML page (view source to see it) for the group I'm trying to help now. The user enters their own XML page URL (which has an eu subdomain in this case, but the subdomain is not within the XML itself), so when I make links to the character URL, I need to build the entire URL:
        <a target="_blank" href="http://eu.wowarmory.com/character-sheet.xml?{@url}">
            <xsl:value-of select="@name"/>
        </a>
    But I can't just hard-code the domain to eu, since there are multiple regions. Here are the possibilities: us = www, europe = eu, korea = kr, china = cn and taiwan = tw. Here is a snippet of the XML which shows the url parameters:
        <character achPoints="4275" classId="3" genderId="0" level="80" name="Virtex" raceId="4" rank="2" url="r=Drek%27Thar&amp;cn=Virtex"/>
    I guess I could just have the user add a small bit of HTML with their region in another part of their page, something like <div id="region">eu</div>, but I'm trying to make this work without any extra coding on their part. Edit: OK, to state my question explicitly: how do I get the URL's subdomain using XSL?

    Read the article

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how un-straightforward managing modules is. Sure, cpan X does theoretically work, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment):
    - At work (Windows 7) I have problems using cpan because of our firewall, which makes FTP unusable.
    - At home (Mac OS X) it does work.
    - In the external environment (Linux CentOS) it worked after hours, because I don't have root access and I had to configure cpan to operate as a non-root user.
    I've also tried on another server where I have access. Whereas the previous external environment is a VPS, so I have shell access, this other one is cheap shared hosting where I have no way to install new modules other than the ones pre-installed. At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there. Now, my perplexity is about Perl being a dynamic language. I've had all these kinds of problems while working, for example, with C, where I had to compile all the libraries for each platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works. So, my questions: if Perl's modules are written in Perl, why does the copy/paste approach not work? If some modules (or parts of them) have to be compiled, how can I see on CPAN whether a module is Perl-only or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? That would solve my problem under Windows.

    Read the article

  • AJAX Post Not Sending Data?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost, so forgive me, but I have new data. I am running a JavaScript log-out function called logOut() that makes a jQuery ajax call to a PHP script:
        function logOut(){
            var data = new Object;
            data.log_out = true;
            $.ajax({
                type: 'POST',
                url: 'http://www.mydomain.com/functions.php',
                data: data,
                success: function() {
                    alert('done');
                }
            });
        }
    The PHP function it calls is here:
        if(isset($_POST['log_out'])){
            $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
            $connection->runQuery($query); // <-- my own database class...
            // omitted code that clears session etc...
            die();
        }
    Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data will not trigger my query (this will last about an hour or so). I figured out the POST data is not being set by adding this at the end of my script:
        $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
        $connection->runQuery($query);
    So, now I know for certain my log-out function is being skipped, because in my database is the following data: if it were NOT being skipped, my data would show up like this: I know it is being skipped for two reasons: one, the die() at the end of my first PHP block, and two, if it were a success, a "logOutSuccess" would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But, if that's the case??? Thanks in advance. -J

    Read the article

  • Connecting to secure database from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access behind a web service that the clients then access, rather than them using the VPN route to connect directly to the database. My question is related to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host, in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available in order to allow my .NET web service to connect to the private network in as painless and transparent a way as possible. Switching the database onto public hosting is not an option, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • URL equals and checking Internet access

    - by James P.
    On http://java.sun.com/j2se/1.5.0/docs/api/java/net/URL.html it states that: Compares this URL for equality with another object. If the given object is not a URL then this method immediately returns false. Two URL objects are equal if they have the same protocol, reference equivalent hosts, have the same port number on the host, and the same file and fragment of the file. Two hosts are considered equivalent if both host names can be resolved into the same IP addresses; else if either host name can't be resolved, the host names must be equal without regard to case; or both host names equal to null. Since hosts comparison requires name resolution, this operation is a blocking operation. Note: The defined behavior for equals is known to be inconsistent with virtual hosting in HTTP. According to this, equals will only work if name resolution is possible. Since I can't be sure that a computer has internet access at a given time, should I just use Strings to store addresses instead? Also, how do I go about testing if access is available when requested?

    Read the article

  • Jetty 7 will not allow me to customize a session cookie path

    - by Bob Obringer
    Using Jetty 7.0.2, I am unable to set a custom session cookie path. I am hosting multiple sites on the same server, using Apache to proxy requests to the proper context. (I've replaced http with htp below, as Stack Overflow thinks my multiple links might be spam.)
        <VirtualHost *:80>
            ServerName context.domain.com
            ProxyRequests On
            ProxyPreserveHost Off
            <Proxy *:80>
                Order deny,allow
                Allow from 127.0.0.1
            </Proxy>
            ProxyPass / htp://localhost:8080/context/
            ProxyPassReverse / htp://localhost:8080/context/
            <Location />
                Order allow,deny
                Allow from all
            </Location>
        </VirtualHost>
    Jetty is running on the same server on port 8080 and my context is available @ /context. The user accesses the application @ htp://context.domain.com, but Jetty is setting the path for the session cookie @ /context. This prevents the browser from accessing the cookie, since the actual path to the context is not being used. I need to override Jetty's default setting, so that the cookie is set for the context but with the path at the root ( / ). In my Jetty's webdefault.xml I have the following, which is partially working:
        <context-param>
            <param-name>org.eclipse.jetty.servlet.SessionCookie</param-name>
            <param-value>CustomCookieName</param-value>
        </context-param>
        <context-param>
            <param-name>org.eclipse.jetty.servlet.SessionPath</param-name>
            <param-value>/</param-value>
        </context-param>
    The cookie is properly set with a custom name, but it is NOT setting the SessionPath. No matter what I set the value to... it refuses to set a cookie at any path but /context. This has been driving me crazy, so any help would be greatly appreciated.

    Read the article

  • Connecting to secure database on private network from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access behind a web service that the clients then access, rather than them using the VPN route to connect directly to the database. My question is related to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host, in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available in order to allow my .NET web service to connect to the private network in as painless and transparent a way as possible. [edit] The web service will also be available to the retail website, in order for it to look up product info as well as allocate stock transfers to the same SQL Server db; it will therefore be located on the same domain as the retail site. Switching the database onto public hosting is not feasible, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • How to find Tomcat's PID and kill it in Python?

    - by 4herpsand7derpsago
    Normally, one shuts down Apache Tomcat by running its shutdown.sh script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running shutdown.sh gracefully shuts down some parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running. I'm trying to write a simple Python script that:
    1. Calls shutdown.sh
    2. Runs ps -aef | grep tomcat to find any process with Tomcat referenced
    3. If applicable, kills the process with kill -9 <PID>
    Here's what I've got so far (as a prototype - I'm brand new to Python BTW):
        #!/usr/bin/python
        # Imports
        import sys
        import subprocess

        # Load from imported module.
        if __init__ == "__main__":
            main()

        # Main entry point.
        def main():
            # Shutdown Tomcat
            shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
            subprocess.call([shutdownCmd], shell=true)

            # Check for PID
            grepCmd = "ps -aef | grep tomcat"
            grepResults = subprocess.call([grepCmd], shell=true)
            if(grepResult.length > 1):
                # Get PID and kill it.
                pid = ???
                killPidCmd = "kill -9 $pid"
                subprocess.call([killPidCmd], shell=true)

            # Exit.
            sys.exit()
    I'm struggling with the middle part - with obtaining the grep results, checking to see if their size is greater than 1 (since grep always returns a reference to itself, at least 1 result will always be returned, methinks), and then parsing that returned PID and passing it into the killPidCmd. Thanks in advance!
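    As a minimal sketch of that missing middle step, the PID can be pulled out of the ps output with the standard library alone; the shutdown.sh path below and the choice to match Tomcat's Bootstrap class name are assumptions to adjust, but matching the class name rather than the word "tomcat" conveniently sidesteps the grep-finds-itself problem:

        import os
        import signal
        import subprocess

        def find_tomcat_pids():
            # List every process as "PID COMMAND" and keep the Tomcat JVMs.
            output = subprocess.check_output(["ps", "-eo", "pid,args"], text=True)
            pids = []
            for line in output.splitlines()[1:]:           # skip the header row
                pid, _, args = line.strip().partition(" ")
                if "org.apache.catalina.startup.Bootstrap" in args:
                    pids.append(int(pid))
            return pids

        def shutdown_tomcat():
            # Ask Tomcat to stop cleanly first (the path is an assumed example),
            # then force-kill whatever is still running.
            subprocess.call(["sh", "/opt/tomcat/bin/shutdown.sh"])
            for pid in find_tomcat_pids():
                os.kill(pid, signal.SIGKILL)               # same as kill -9 <PID>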

    Read the article

  • asp.net on linux/mono broken? weird load then 404/resource error

    - by acidzombie24
    I am testing out my ASP.NET app on Mono's VMware image, using openSUSE. From the home page it says:
    "Trying out your own code: You can test your own applications by connecting with the file manager on this machine to the machine hosting your application, and copying over the directory containing the application and its associated files. To test your ASP.NET applications, copy your code to a new directory in /srv/www/htdocs, then visit the following url: http://localhost/directoryname/page.aspx, where directoryname is the directory where you deployed your application, and page.aspx is the initial page for your software, typically Default.aspx."
    Mirror: http://tortillaflatscafe.com/
    I saw my app had errors. After I restarted the VM, I can no longer run my app. Instead of error messages or anything useful, I get this instead:
    "Server Error in '/myfoldername' Application. The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: /Default.aspx"
    I tried /default.aspx, using sudo to set permissions to 777 recursively, and restarting Apache via the terminal, but I could not get this error to go away. How do I fix this?

    Read the article

  • ASPX page renders differently when reached on intranet vs. internet?

    - by MattSlay
    This is so odd to me. I have IIS 5 running on XP and it's hosting a small ASP.NET app for our LAN that we can access by using the computer name, virtual directory, and page name (http://matt/smallapp/customers.aspx). But you can also hit that IIS server and page from the internet, because I have a public IP that my firewall routes to the "Matt" computer (like http://213.202.3.88/smallapp/customers.aspx [just a made-up IP]). Don't worry, I have Windows domain authentication in place to protect the app from anonymous users. So all the above works fine. But what's weird is that the borders of the divs on the page are rendered much thicker when you access the page from the intranet versus the internet (I'm using IE8), and also some of the div layout (stretching and such) acts differently. Why would it render differently in the same browser based on whether it was reached from the LAN or from the internet? It does NOT do this in Firefox, so it must be an IE8 thing. All the CSS for the divs is right in the HTML page, so I do not think it is a caching issue with a CSS file. Notice how the borders are different in these two images: Internet: http://twitpic.com/hxx91 . LAN: http://twitpic.com/hxxtv

    Read the article

  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, my company develops a CDN / web-hosting solution. We have a middleware that serves as the business logic layer and exposes a web service for the front-end. I would like to find a clean solution to feature management - there are uncertainties and ugly workarounds/solutions in the software about which the devs would say "when it happens or is broken, we will fix it". For example, here are some features that a web publisher can have:
    - Sites limit
    - Bandwidth limit
    - SSL feature + SSL configuration per site
    If we downgrade a web publisher who has 10 sites down to 5 sites, we can choose not to suspend the extra 5 sites, or we can prompt for suspension before the downgrade. For the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded it, we suspend his account. For the SSL feature: every SSL configuration is tied to a site, so what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled? So as you can see, there are many different situations and different ways of handling them. I could make a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade. Or a system that ignores the impacts and just upgrades/downgrades - bad. Or a system designed in a way that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check if a feature is not DEFUNCT). There may be other ways I am still considering, but I am puzzled. I am wondering how you would tackle this issue, and whether there are any recommended patterns, books or software that you think I can refer to? Appreciate your help.

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on.
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for or gzip (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }
    Is anyone having better success with this?

    Read the article

  • HTML not appearing in Emails

    - by John
    My website sends out HTML e-mails, but most of my recipients are receiving them as raw HTML source instead of the nice table layouts. The problem doesn't appear to be an e-mail client issue, since the e-mails display properly in webmail clients like Gmail, Yahoo, Hotmail, etc. They also display properly when viewed through Outlook or Thunderbird connected to Gmail, Yahoo, Hotmail, etc. However, I have one domain name that I registered with a shared hosting provider called 1and1.com. I tried viewing my e-mails through their webmail client, Thunderbird and Outlook, but in all three cases only the HTML markup showed up. Also, I assume most of my recipients use MS Outlook with MS Exchange Server, because they are business/finance people. Unfortunately, I don't have access to an e-mail account managed by an MS Exchange server to test with. I made sure I'm sending my e-mails with the following headers:
        MIME-Version: 1.0
        Content-type: text/html; charset=iso-8859-1
    Does anyone know what might be wrong? Can anyone recommend a solution?
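    For comparison, here is a minimal sketch of building a well-formed HTML message with Python's standard email library (the addresses and SMTP host are placeholders). The point is to let the library emit the MIME-Version, Content-Type and multipart structure rather than hand-assembling headers, since one common cause of "HTML shown as source" is headers that never reach the client intact, for example because a stray blank line pushed them into the body:

        import smtplib
        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText

        msg = MIMEMultipart("alternative")       # plain-text fallback plus HTML part
        msg["Subject"] = "Monthly report"
        msg["From"] = "noreply@example.com"      # placeholder sender
        msg["To"] = "someone@example.com"        # placeholder recipient

        msg.attach(MIMEText("Plain-text version of the report.", "plain"))
        msg.attach(MIMEText("<table><tr><td>HTML version</td></tr></table>", "html"))

        with smtplib.SMTP("localhost") as smtp:  # placeholder SMTP host
            smtp.send_message(msg)

    Including a plain-text part alongside the HTML (multipart/alternative) also tends to fare better with Exchange/Outlook setups than a text/html-only body.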

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant: each customer has his own file, around 5 to 10 GB each, and I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I have thought about the following options:
    - Use two big SQL Server 2012 machines to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
    - Denormalize the database and use Redis on each web server, with Booksleeve as the Redis client.
    - Use a combination of SQL Server and Redis.
    - Use SQL Server 2012 together with Hadoop.
    - Use Hadoop without SQL Server.
    What is the best way, for a read-only database, to get the best performance without losing maintainability? Does MapReduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to build one accelerator table against which all queries can be performed, with all other information from the other tables reachable by a simple key lookup.

    Read the article

  • Open Source Projects - Is there a site which lists projects that need more developers?

    - by Jamie
    Morning/afternoon/evening all. Do any of you know of a website which lists open-source projects that are in need of more help? Let me elaborate: I would like to work on another open-source project (I already work on a couple); however, it would be nice to have a site which lists lots of OS projects, their aims, deadlines, workload, how many more developers they need, etc. Of course, I could just pick a topic I'm interested in, find an OS project and then work on it; however, it would be nice to see a diversified list of projects - primarily because some little-known but awesome projects get little attention, while big projects such as jQuery forks, Adium, GIMP, etc. get a lot of attention because they are well known (and of course because they are great) and thus get a lot of developers working on them. It would be nice to see some little-known projects getting more attention and hopefully drawing some people to work on them. Currently there are many websites hosting OS projects, such as GitHub, SourceForge, Google Code, etc. A website to centralise all of this into one place and categorise it would be awesome. Let me know your thoughts please. I'm not looking for an answer per se, so I will mark this as a community wiki.

    Read the article

  • ASP.NET - Accessing copied content

    - by James Kolpack
    I have a class library project which contains some content files configured with the "Copy if newer" copy build action. This results in the files being copied to a folder under ...\bin\ for every project in the solution. In this same solution, I've got an ASP.NET web project (which is MVC, by the way). In the library I have a static constructor that loads the files into data structures accessible by the web project. Previously I'd been including the content as embedded resources; I now need to be able to replace the files without recompiling. I want to access the data in three different contexts:
    - Unit testing the library assembly
    - Debugging the web application
    - Hosting the site in IIS
    For unit testing, Environment.CurrentDirectory points to a path containing the copied content. When debugging, however, it points to C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE. I've also looked at Assembly.GetExecutingAssembly().Location, which points to C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c44f9da4\9238ccc\assembly\dl3\eb4c23b4\9bd39460_f7d4ca01\. What I need is the physical location of the webroot \bin folder, but since I'm in a static constructor in the library project, I don't have access to Request.PhysicalApplicationPath. Is there some other environment variable or structure where I can always find my "Copy if newer" files?

    Read the article

  • CMS for managing plain-text content, with tagging

    - by user575606
    Hi, we have some quite specific requirements for our app that a CMS may help us with, and we were hoping that someone may know of a CMS that matches these requirements (it's quite a laborious task to download each CMS and verify this manually). We want a CMS that allows users to create and manage articles, but stores the articles as plain text only; all of the CMSs that we have looked at so far are geared towards creating HTML pages. We also want the CMS to manage workflow (an approval process) and history tracking. The reason for the plain-text requirement is that the intent is to allow business people to generate content which we are going to display in our Silverlight application - we don't want to go down the route of hosting and displaying arbitrary HTML in the app, as we want the styling to be seamless with our app, amongst other reasons. We would also want to allow the user to link between articles, but not to external sites (i.e. HTML with no formatting, or some other way of specifying article links), and the third requirement is the ability to tag articles and search on articles. Does anyone know of any non-HTML-targeted CMS systems that may match these requirements? Thanks, Gary

    Read the article

  • Why is search functionality not working on this page?

    - by DaveDev
    We deliver micro-site content for our client. Our content is injected into a wrapper that is supplied by another developer, and to deliver our content we host the wrapper as well as the content. The user can access this at http://fundcentre.newireland.ie/ (try a search for 'bloxham'). For the other content that is not ours, the other developer hosts a similar (though slightly different) wrapper and delivers the content; the user accesses this here: http://www.newireland.ie/ (try a search for 'bloxham'). The wrapper contains a search box, which does not work for us but works for the other developer. I took a look at the network traffic with Firebug, and it appears that when I do the search from the wrapper that we're hosting, I'm getting a "407 Proxy Access Denied" error. My guess is their proxy has a problem with the fact that the search is being conducted from a page hosted outside the scope of their proxy. It was also suggested that there were JavaScript errors on the page preventing the search from executing, but I can't see any; also, I don't think I'd get as far as the proxy error if that were the case. I don't really understand this stuff too well, so could somebody with a bit more experience please take a look and maybe shed some light on this for me? Thanks.

    Read the article

  • Protecting sensitive entity data

    - by Andreas
    Hi, I'm looking for some advice on architecture for a client/server solution with some peculiarities. The client is a fairly thick one, leaving the server mostly to persistence, concurrency and infrastructure concerns. The server contains a number of entities which hold both sensitive and public information. Think, for example, of the entities as persons: assume that social security number and name are sensitive and age is publicly viewable. When starting the client, the user is presented with a number of entities, without any sensitive information disclosed. At any time the user can choose to log in and authenticate against the server; if the authentication is successful, the user is granted access to the sensitive information. The client hosts a domain model, and I was thinking of implementing this as a kind of "lazy loading", with the first request instantiating the entities and later requests refreshing them with the sensitive data. The entity getters would throw exceptions on sensitive information when it has not been disclosed, e.g.:
        class PersonImpl : PersonEntity
        {
            private bool undisclosed;

            public override string SocialSecurityNumber
            {
                get
                {
                    if (undisclosed)
                        throw new UndisclosedDataException();
                    return base.SocialSecurityNumber;
                }
            }
        }
    Another, more friendly approach could be to have a value object indicating that the value is undisclosed:
        get
        {
            if (undisclosed)
                return undisclosedValue;
            return base.SocialSecurityNumber;
        }
    Some concerns:
    - What if the user logs in and then out? The sensitive data has been loaded, but must be disclosed once again.
    - One could argue that this type of functionality belongs within the domain and not in some infrastructural implementation (i.e. repository implementations).
    - As always when dealing with a larger number of properties, there's a risk that this type of functionality clutters the code.
    Any insights or discussion are appreciated!

    Read the article
