Search Results

Search found 35433 results on 1418 pages for 'document based'.


  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like serverfault does). I've looked at twiki, which many people suggested, but haven't found what I'm looking for. Basically I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?", which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then go ahead and create a new version (ver 2.0) that adds many new features and changes some existing features.

    Now, how do we support both products in the same KB? The naive method is to copy all articles from 1.0 and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar). The problem with articles being indexed in the system by title is that it's very hard to filter based on meta-data like version. What happens when we create version 3.0 or 4.0? The end situation here is that you have a mess of articles. They're hard to search, hard to filter, and even harder to manage.

    The solution (it seems to me) is to use tags, rather than text, as the article index mechanism. Articles can then be tagged with tags representing the software version, topic area, etc. Users can then filter based on tag; an example search might be "version_1 printing", which straight away gives a list of articles with all these tags. So that's what I'm looking for: a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with drupal, but I was hoping for something that worked out of the box.


  • Easy to use JSON Web Service Hosts?

    - by Serguei Fedorov
    I saw this being used by someone in a college class once and cannot find anything analogous to it. I am not sure if this is the right place to ask about something like this, but hopefully I can get some direction. I want to write an app which uses web services that can obtain data and push it back to the client apps. Right now I am gathering up the design and documentation of this app. Not having to code the web service myself would reduce development time by a lot; instead I would use a hosted web service that is easy to set up and manage. Either XML-based or JSON-based is totally fine, though I would prefer JSON for its reduced overhead. Like I said, I have seen this demonstrated before: you define the data structure to be stored and how it is treated. I cannot find the person who demonstrated this; hopefully someone can suggest something? The service he used was free with a limited number of requests allowed. EDIT: He was using an online service to do this, not a script installed onto an existing web hosting account. Thank you!


  • Sync Banshee library data.

    - by Dom
    I use Banshee to organise my music, I particularly like its scoring system and I have smart playlists based on it. However, I have two versions of my music library, one on each of my computers. As one of the computers is small I only have a favourite set of songs on that computer rather than my whole collection. The computers are not on a local network, but I do use Ubuntu One for file sharing between them. Is there any way I can synchronise song data (play count, score, skip count ...) and playlist data (including smart playlists that include songs based on this data) between the two computers? This would only be relevant of course for the songs that exist on both computers, the songs that exist on only one would need to be ignored. I did consider putting the library data file (I think it is .xml but I'm not sure) into the shared file and creating a symbolic link to it, but then I wouldn't be able to have a different set of songs on each computer. Thank you.


  • Apache Configuration Issue - website without www going to default site

    - by Brian
    I have included a copy of my virtual host file for apache below. (However, I have hidden the ip address and domain name for now.) My problem is that the following work:

        www.mydomainnamehere.org
        www.mydomainnamehere.com
        mydomainnamehere.com

    This one doesn't work: mydomainnamehere.org. Instead of going to the document root listed below, it goes to the document root of the default site. What could be causing this?

        <VirtualHost [ipaddresshidden]:80>
            ServerAdmin [email protected]
            ServerName mydomainnamehere.org
            ServerAlias www.mydomainnamehere.org
            ServerAlias mydomainnamehere.com
            ServerAlias www.mydomainnamehere.com
            DocumentRoot /home/www/mydomainnamehere.org/html/
            ErrorLog /home/www/mydomainnamehere.org/logs/error.log
            CustomLog /home/www/mydomainnamehere.org/logs/access.log combined
        </VirtualHost>
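
    A quick way to see how Apache has actually parsed the name-based virtual hosts, and which vhost is the default for each address, is the -S switch (a generic diagnostic sketch, not taken from the original post):

        # Dumps the parsed vhost configuration. The default server for each
        # address:port pair is listed first; any hostname that matches no
        # ServerName/ServerAlias falls through to that default.
        apachectl -S

    If mydomainnamehere.org does not appear in that output, the file defining the vhost is probably not being loaded at all.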


  • Product Support News for Oracle Solaris, Systems, and Storage

    - by user12244613
    Hi System Support Customers, the April newsletter is now available. The April 2012 Newsletter for Oracle Solaris, Systems, and Storage is available via document 1363390.1 (requires a My Oracle Support account to access). Please take a few minutes to read the newsletter. The newsletter is the primary method of communication about what we in support would like you to be aware of. If you are not receiving the newsletter, it could be because: (a) your Oracle profile does not have "allow Oracle Communication" selected (on oracle.com Sign In, or if logged in select "Account" and, under your Job Role, check that you have selected this box: [ ] Yes, send me e-mails in Oracle Products....); or (b) you have not logged a service request during the last 12 months. Oracle is working to improve the distribution process; changes are coming, and once they are ready I will write more about that. But today, if you don't automatically receive the newsletter, all you can do is save it as a favorite within My Oracle Support and come back on the 2nd of each month to check out the changes. What I am really interested to find out this month is whether the Newsletter is providing the type of items that you are interested in. To gather some data on that, I have a small 2-minute survey running on the newsletter, or you can access it [ here ]. Finally, if you think I am missing a topic in the Newsletter, let me know by taking the survey or suggesting a topic via this blog.

    Get Proactive: Don't forget about being proactive. The latest updates for the Systems and Solaris pages in the Get Proactive area are now available. Check out document 432.1 and learn what proactive features are available for Systems and Solaris.


  • What are my choices for server side sandboxed scripting?

    - by alfa64
    I'm building a public website where users share data and scripts to run over some data. The scripts are run server-side in some sort of sandbox, without other interaction, in this cycle: my Perl program reads a user-made script from a database, adds the data to be processed into the script (i.e. a JSON document), then calls the interpreter; the interpreter returns the response (a JSON document or plain text), and I save it to the database with my Perl script. The script should have access to some built-in functions added to the scripting language by myself, but nothing more. So I've stumbled upon node.js as a javascript interpreter, and an hour or so ago onto Google's V8 (does V8 make sense for this kind of thing?). CoffeeScript also came to my mind, since it looks nice and it's still Javascript. I think javascript is widespread enough and more "sandboxeable", since it doesn't have OS calls or anything remotely insecure (I think). By the way, I'm writing the system in Perl and Php for the front end. To improve the question: I'm choosing Javascript because I think it is secure and simple enough to implement with node.js, but what other alternatives are there for achieving this kind of task? Lua? Python? I just can't find information on how to run a sandboxed interpreter in a proper way.
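
    For the node.js route, here is a minimal sketch of the cycle described above using Node's built-in vm module. Note that the Node documentation itself warns that vm is not a security mechanism on its own, so a real deployment would still need OS-level isolation and resource limits; runUserScript and the sample script are invented for illustration:

        const vm = require('vm');

        function runUserScript(userCode, inputJson) {
          // Expose only the input data and an output slot; the sandbox has
          // no require, no process, no access to the host's globals.
          const sandbox = { input: JSON.parse(inputJson), output: null };
          vm.createContext(sandbox);
          // timeout aborts runaway loops in the user script
          vm.runInContext(userCode, sandbox, { timeout: 1000 });
          return JSON.stringify(sandbox.output);
        }

        // Example "user script": sum a list from the input document.
        const script = 'output = input.values.reduce((a, b) => a + b, 0);';
        console.log(runUserScript(script, '{"values":[1,2,3]}')); // prints 6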


  • Desktop becomes unusable after upgrade to 12.04

    - by Tom Nail
    I have multiple Ubuntu systems connected to a KVM, one of which I recently upgraded from Ubuntu 10 to 12.04. After the upgrade, this system's desktop does fine until it is allowed to go idle (i.e., I've switched to another system on the KVM and it locks its desktop). When I come back to it, the screen is garbled and paging across at a rate seemingly determined by the mouse. Although no pointer is visible, I can get the screen to stop paging (and just be garbled) by moving the mouse left and right; the paging will slow down and come to a stop if I can align things carefully enough. This condition persists even when I try to go to a CLI-based login (e.g., Ctrl+Alt+F1) and will continue until I reboot the machine. Unfortunately, I'm not very familiar with the Unity desktop, so I don't know where to find things to troubleshoot. A restart of lightdm doesn't change anything, so I'm wondering if this might be more hardware-based (although this machine hasn't given me any trouble previously in the same setup). The .xsession-errors file has some issues with compiz, nautilus and GConf listed, but I'm not sure those are actually germane to the issue. Thanks for any help, -=Tom


  • VBScript and Xpath excluding duplicates [closed]

    - by Malachi
    I am trying to pull names from an XML document using a VBScript. The XML document structure is:

        <Aliases>
          <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
          <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
          <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
          ...
        </Aliases>

    The XML file might have 100 rows with the same name coming from several different CaseIDs, because for this part of my VBScript I am trying to pull all the different names from all cases. But here is the issue: I don't want to return duplicates. Is there a way to do this with an XPath expression, or should I try to do this with VBScript? EDIT: I am pretty sure that I am going to have to do this with VBScript. Would it be faster and more efficient to solve this issue in VBScript, in XPath, or in populating the XML I am retrieving information from (this might prove more difficult than the other two options)? I am also asking a similar question on stackoverflow.
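
    XPath 1.0, which is what MSXML exposes to VBScript, has no distinct-values() function, so the deduplication usually ends up on the VBScript side. A minimal sketch using a Dictionary to emit each name once (aliases.xml is a placeholder file name):

        Set xml = CreateObject("MSXML2.DOMDocument.6.0")
        xml.async = False
        If Not xml.load("aliases.xml") Then WScript.Quit 1

        Set seen = CreateObject("Scripting.Dictionary")
        seen.CompareMode = vbTextCompare  ' treat names case-insensitively

        For Each node In xml.selectNodes("/Aliases/Alias")
            aliasName = Trim(node.text)
            If Not seen.Exists(aliasName) Then
                seen.Add aliasName, True
                WScript.Echo aliasName  ' first occurrence only
            End If
        Next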


  • Preventing ugly hyperlinks in Word-generated PDFs?

    - by Jay Levitt
    I'm creating a document in Word 2007 on Windows XP, and using the "Save As PDF" add-in. The document contains hyperlinks. When I open that PDF in Preview.app on a Mac (OS X 10.5.8), I see ugly boxes around all the hyperlinks. I've tried editing the PDF in Acrobat Pro 9.2.0 on the Mac, but the boxes don't show up there. If I select a hyperlink anyway with the Link Tool, right-click, and select "Properties..." no properties dialog ever appears. I want the links to be clickable, but I want them to look decent. How can I fix them? I don't have Acrobat for Windows.


  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background The "all-PK-must-be-surrogates" approach is not present in Codd's Relational Model or any SQL Standard (ANSI, ISO or other). Canonical books seems to elude this restrictions too. Oracle's own data dictionary scheme uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommend the same canonical books do: Use surrogate keys as primary keys when: There are no natural or business keys Natural or business keys are bad ( change often ) The value of natural or business key is not known at the time of inserting record Multicolumn natural keys ( usually several FK ) exceed three columns, which makes joins too verbose. Also I have not found canonical source that says natural keys need to be immutable. All I find is that they need to be very estable, i.e need to be changed only in very rare ocassions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seems to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much prevision is made for something that may or may not happen in the future ( the natural PK changed so we will have to use the RDBMS cascade update funtionality ) at the expense of day-to-day task like having to join more tables in every query and having to write code for importing data between databases, an otherwise very strightfoward procedure (due to the need to avoid PK colisions and having to create stage/equivalence tables beforehand ). Other argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PK. But indexes based on short, fix-length varchar are almost as fast as integers. The questions - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach ? - Has Codd's relational model been superceded by a newer relational model ?


  • This operation has been cancelled due to restrictions in effect on this computer

    - by Dan
    I have this HUGELY irritating problem on Windows 7 (x64). Whenever I click on ANY link (that exists in a Word document, Excel or Outlook), I get an alert box with the message: This operation has been canceled due to restrictions in effect on this computer. I have been scouring my settings and the Internet for a solution, but to no avail. What is the reason for this problem? It even happens when I click anchors in a Word document. That is, I can't even click on an entry in a Table of Contents to go to the appropriate page; I get the same error then. Is this a Windows 7 thing? Is there any way to turn this off?


  • cleaning up pdftotext font issues

    - by mankoff
    I'm using pdftotext to make an ASCII version of a PDF document (made with LaTeX), because collaborators prefer a simple document in MS Word. The plain text version I see looks good, but upon closer inspection the f character seems to be frequently mis-converted depending on what characters follow. For example, fi and fl often seem to become one special character, which I will try to paste here: ﬁ and ﬂ. What is the best way to clean up the output of pdftotext? I am thinking sed might be the right tool, but am not sure how to detect these special characters.
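
    A minimal sed sketch, assuming GNU sed, a UTF-8 locale, and that the stray glyphs really are the Unicode ligature characters (U+FB01, U+FB02 and the ff/ffi/ffl family); out.txt stands in for the pdftotext output file:

        # Replace single-glyph ligatures with their plain-letter spellings.
        sed -i 's/ﬁ/fi/g; s/ﬂ/fl/g; s/ﬀ/ff/g; s/ﬃ/ffi/g; s/ﬄ/ffl/g' out.txt

    If it is unclear which characters survived the conversion, grep -P '[^\x00-\x7F]' out.txt will list the lines that still contain non-ASCII bytes.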


  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from Powershell. I kept getting:

        <ResponsePacket xmlns="urn:Microsoft.Search.Response">
          <Response domain="">
            <Status>ERROR_NO_RESPONSE</Status>
            <DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage>
          </Response>
        </ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. Turns out the URL for the SearchServer2008 search webservice was incorrect. I was calling

        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample powershell script:

        # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
        $error.clear()

        # Bad SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/spsearch.asmx?WSDL"
        # Good SearchServer2008Express Search URL
        $URI = "http://ss2008/_vti_bin/search.asmx?WSDL"

        $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential

        $queryXml = "<QueryPacket Revision='1000'>
          <Query>
            <SupportedFormats>
              <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
            </SupportedFormats>
            <Context>
              <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
              <!-- <QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
            </Context>
          </Query>
        </QueryPacket>"

        $statusResponse = $search.Status()
        write-host '$statusResponse:' $statusResponse

        $GetPortalSearchInfo = $search.GetPortalSearchInfo()
        write-host '$GetPortalSearchInfo:' $GetPortalSearchInfo

        $queryResult = $search.Query($queryXml)
        write-host '$queryResult:' $queryResult


  • Retrieve malicious IP addresses from Apache logs and block them with iptables

    - by Gabriel Talavera
    I'm trying to keep away some attackers who try to exploit XSS vulnerabilities on my website. I have found that most of the malicious attempts start with a classic "alert(document.cookie);" test. The site is not vulnerable to XSS, but I want to block the offending IP addresses before they find a real vulnerability, and also keep the logs clean. My first thought is to have a script constantly checking the Apache logs for all IP addresses that start with that probe and sending those addresses to an iptables DROP rule, with something like this:

        cat /var/log/httpd/-access_log | grep "alert(document.cookie);" | awk '{print $1}' | uniq

    What would be an effective way to send the output of that command to iptables? Thanks in advance for any input!
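
    One hedged sketch of the glue, run as root (the log path is a placeholder for whatever the real access log is called, and iptables -C requires iptables 1.4.11 or newer):

        #!/bin/sh
        # Insert a DROP rule for each offending address, skipping addresses
        # that are already blocked (-C checks for an existing matching rule).
        for ip in $(grep -F 'alert(document.cookie);' /var/log/httpd/access_log \
                    | awk '{print $1}' | sort -u); do
            iptables -C INPUT -s "$ip" -j DROP 2>/dev/null \
                || iptables -I INPUT -s "$ip" -j DROP
        done

    sort -u replaces uniq here because uniq only collapses adjacent duplicates. For something less fragile, fail2ban does this same watch-log-then-block job with ready-made filters and automatic expiry of bans.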


  • Reliable 1tb or larger hard drive?

    - by jasondavis
    I am in the market for 2-3 new drives; I would like each to be at least 1TB to 2TB in size. I have been reading all the reviews on newegg.com for 1TB and larger drives, and they all have one thing in common: almost all the ones I read about have complaints of the drives being DOA or dying within a few weeks of use. I am hoping to find some drives in this storage range that have a reputation for lasting a long time instead of a short life. Please help me if you have any experience with these sorts of drives. Most of the ones I read about were Western Digital brand. I realize some might complain that this question's answer would be based upon a timeframe, so if a user searches and finds this answer a year from now it will be outdated, but I would appreciate any help based on the current hard drives available as of April 10th, 2010 on newegg.com.


  • How to stop Word 2011 opening hyperlinks on click?

    - by John Yeates
    In previous versions of MS Word, there was a preference for the action to be taken when the user clicked a hyperlink: open it, or edit it. Word 2011 appears to have defaulted to opening the hyperlink, and I can't find the preference to change this behaviour. How can I change Word's default behaviour when a hyperlink is clicked to be editing the text of the hyperlink? Holding down a modifier key when clicking is not an acceptable solution, as the aim here is to prevent misclicks from causing web pages to open. Edit: the links need to stay as links in the saved document. But when clicked on my machine, they should not open; Word needs to default to just editing the link, so an inaccurate click does not take me out of the document into Safari. Older versions of Word had a preference controlling this, and Microsoft seem to have removed it and fixed the behaviour at the unsafe option in order to satisfy the point-and-drool crowd.


  • Will new Twitter API 1.1 allow hashtag/tweet/trend queries without any authentication, i.e. for a client that does not use a user's account at all?

    - by P5music
    I see that, even when not logged in to Twitter with an account, if I google hashtags or Twitter accounts, Twitter shows them. I think it should also be possible to get those tweets programmatically, but I do not know it for sure, so I ask for confirmation here, especially for the future with the new Twitter API restrictions. I mean, will it be possible to get tweets from hashtags or accounts without logging in to a user account, and so not wanting to access the user settings, subscriptions, etc. (because I do not need them), thus not having to respect any token limit? I found these API 1.1 FAQs; do I have to be concerned?

        Will an application have to request user authorization just to make public API calls? When API v1.1 is released, user authorization (and access tokens) are required for all API 1.1 requests. In the weeks following release, some methods will require only application-based authentication for certain "userless" contexts.

        Will the Search API require authentication? The Search API is now part of the official REST API in version 1.1. In addition to serving results in a format consistent with other Tweet resources, usage will also require authentication.


  • How can I view a PDF in Firefox when the server specifies the wrong content type?

    - by Sam
    I am using Mozilla Firefox with a PDF viewer plug-in. The plug-in has been correctly associated with Adobe Reader files to view them in the browser in the settings. I would like to be able to view PDF files in Firefox rather than downloading them. This already works correctly when a web server indicates that a file has the Content-Type of application/pdf. However, some web servers provide other Content-Types for PDFs, such as application/octet-stream. (See this example of a PDF served with a non-pdf Content-Type.) I have looked at Firefox's MimeTypes.rdf file, and it appears to only support mapping applications based on file types for non-Internet-based files. How can I have Firefox view all PDF documents in-browser rather than only the ones with the application/pdf Content-Type?


  • With the outcome of the Oracle vs Google trial, does that mean Mono is now safe from Microsoft? [closed]

    - by Evan Plaice
    According to an article on Ars Technica, the judge in the case ruled that APIs are not patentable. He compared the structure of modules/methods/classes/functions to libraries/books/chapters. To patent an API would be to put a patent on thought itself; it's the internal implementations that really matter. With that in mind, Mono (a C# clone for Linux/Mac) has always been viewed tentatively because, even though C# and the CLI are ECMA standards, Microsoft holds patents on the technology. Microsoft holds a covenant not to sue open source developers based on those patents, but has maintained the ability to pull the plug on the Mono development team if it felt the project was a threat. With the recent ruling, is Mono finally out of the woods? A firm precedent has been established that patents can't be applied to APIs. From what I understand, none of the Mono implementation is copied verbatim, only the API structure and functionality. It's a topic I have been personally interested in for years now, as I have spent a lot of time developing cross-platform C# libraries in MonoDevelop. I acknowledge that this is a controversial topic; if you have opinions, that's what commenting is for. Try to keep the answers factual and based on established sources.


  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled Microsoft is Aligning with ODBC for Native Relational Data Access in which they basically said "OLE DB is the past, ODBC is the future. Deal with it.". From that blog post:

        We encourage you to adopt ODBC in the development of your new and future versions of your application. You don't need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap.

    I recently undertook a project using SSIS 2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately my finding was that the ODBC Connection Manager is not yet ready for primetime use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so will result in an error:

        "Disconnected recordsets are not available from ODBC connections."

    I have filed a bug on Connect at ODBC Connection Manager does not have same funcitonality as OLE DB. For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet; best to wait for the next version of SSIS before doing that.

    I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind:

        - It doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at ODBC System DSNs are not available in the ODBC Connection Manager). UPDATE: According to a comment on that Connect item, this may only be a problem on 64-bit.
        - In the OLE DB Connection Manager parameter ordinals are 0-based; in the ODBC Connection Manager they are 1-based (oh, I just can't wait for the upgrade mess that ensues from this one!!!)

    You have been warned!

    @jamiet


  • Computer becomes unreachable on lan after some time

    - by Ashfame
    I work on my laptop and ssh into my desktop. I use a lot of key-based authentication for many work servers, but recently I couldn't log in to the desktop because ssh would pick up and try all the keys, and the server would stop accepting attempts before ultimately falling back to password-based login. So right now I am using this command:

        ssh -X -o PubkeyAuthentication=no [email protected] #deskto

    The issue is that after some time the desktop just becomes unreachable from the laptop. I won't be able to reach anything on it through its IP, and today I tried pinging it and found a weird thing: instead of 192.168.1.4, it tries to ping 192.168.1.3, which I am sure is the root cause, as it just can't reach 192.168.1.4 when it's actually trying for 192.168.1.3. Ping command output:

        ashfame@ashfame-xps:~$ ping 192.168.1.4
        PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
        From 192.168.1.3 icmp_seq=1 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=2 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=3 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=4 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=5 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=6 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=7 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=8 Destination Host Unreachable
        From 192.168.1.3 icmp_seq=9 Destination Host Unreachable
        ^C
        --- 192.168.1.4 ping statistics ---
        10 packets transmitted, 0 received, +9 errors, 100% packet loss, time 9047ms
        pipe 3

    Also, the ping error messages come several at a time, not one by one. (izx's answer explains the weirdness I thought there was in the ping command.) I did check the desktop; its local IP is still the same, so something is going on with my laptop. Any ideas?

    P.S. Laptop runs Ubuntu 12.04 and Desktop runs Ubuntu 11.10. Laptop is connected through wifi to the router and Desktop is connected through LAN to the router.

    Update: Even after setting up static IP leases in the router settings, I again ran into this issue.
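
    As an aside on the too-many-keys symptom: rather than disabling public-key authentication outright, a per-host entry in ~/.ssh/config can restrict which key is offered. A minimal sketch (the key path is a placeholder):

        # ~/.ssh/config
        Host desktop
            HostName 192.168.1.4
            User user
            IdentityFile ~/.ssh/id_rsa_desktop
            IdentitiesOnly yes   # offer only the key named above
            ForwardX11 yes       # the equivalent of ssh -X

    With this in place, plain "ssh desktop" replaces the long command line above.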


  • SEO & Multilingual: would be this a good practise?

    - by Younès
    I am currently making a bilingual website and I'd like to get nice SEO results, of course. Here's my idea: the internal links would be composed with the "www" subdomain so that people can share links regardless of their language. A visitor's language is determined by the HTTP_ACCEPT_LANGUAGE PHP variable. So, they would see http://www.site.com/mydocument/123 in their address bar and never see any links like http://fr.site.com/mydocument/123 or http://en.site.com/mydocument/123. The user can always switch the page's language thanks to links in the footer. The language-switching link would be http://fr.site.com/mydocument/123, and clicking on it would change his language session and redirect the user to http://www.site.com/mydocument/123.

    In case of a crawling bot: I read that if the HTTP_USER_LANGUAGE variable is missing then it's a crawling bot, so in that case we set the default language to English. Each page, as I mentioned earlier, has a link for the other language: on the page http://www.site.com/document/1323, the link http://fr.site.com/document/1323 can be seen by the bot and be crawled.

    What do you think about this practice? Would I get good SEO results for each language?
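
    A minimal sketch of the language detection described above, assuming PHP 7+ and only the two languages (detect_lang is an invented helper name):

        <?php
        // Pick 'fr' or 'en' from the Accept-Language header; an absent
        // header (common for crawlers) falls back to the English default.
        function detect_lang(): string {
            $header = $_SERVER['HTTP_ACCEPT_LANGUAGE'] ?? '';
            if ($header === '') {
                return 'en'; // no header: assume a bot, serve the default
            }
            // Look only at the first (usually most preferred) language tag.
            $first = strtolower(trim(explode(',', $header)[0]));
            return (strpos($first, 'fr') === 0) ? 'fr' : 'en';
        }

    A production version would sort the tags by their q-weights instead of trusting the order, but the fallback for header-less crawlers is the part that matters for the scheme above.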


  • Openfire Installation Issue - Can't Login to admin panel

    - by Lobe
    I am trying to get Openfire to install on an Ubuntu virtual machine; however, upon completing the web-based installer, I am unable to log in to the admin panel. So far I have:

        - downloaded the Debian installer
        - installed using stock options
        - added a database and built the structure using the supplied SQL file
        - completed the web-based installer

    I am now trying to log in using username: admin and my password; however, I constantly get a wrong username/password error. There is a record generated in the MySQL database showing the admin user with an encrypted password, and changing it to an unencoded password doesn't work. What is the problem here?


  • Tracking "To Do" Items

    - by Bill Graziano
    One of the challenges I struggle with is keeping a good "to do" list of things I need to do on the various SQL Servers I support. I have servers that I don't visit on a regular basis so my situation may be different than many of you. Though I'm sure you all have servers that you only touch every few months. (And it's usually the accounting server!) It's difficult for me to remember what changes I made and what changes I need to make. I've tried Outlook, OneNote and various other to do list managers and haven't been happy with any of them. Many are close but just don't give me what I need. As a result I've started writing my own. It's web-based so you can use it from anywhere -- including on a server. It also knows just enough about SQL Server to help structure your to do items and your notes. It isn't agent based and doesn't do any monitoring. Think OneNote or Evernote but with some "SQL Servery" stuff built in. If you'd like to try this or take a survey I'm putting together, add your email address to my mailing list.  I should be ready in a week or so.  I'm only going to use this list for notifications about this service. I'd like to find a small group of people that feel the same pain I do and maybe we can build something interesting.


  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users pay a certain sum of money per hour. Routing and traffic accounting on a per-user basis is done by an openSUSE 10.3 server. Login is done via pppoe, and for each connection username, bytes_sent, bytes_rcvd, start_time, end_time, etc. are written into a MySQL database. Now it has been decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to make the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point where I'm absolutely stuck.

    Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections and what they cost via the web interface. For this, we need some kind of user authentication.

    Actual question: How to develop such a user authentication? Every user has a Linux system user account. With this user name and password, the connection to the pppoe server is made by the client machines. I thought about two possible ways to authenticate users.

    First possibility: Users type username and password in a form. This is then somehow checked. We already have the possibility to change passwords via the web interface. Here are parts of the code.

    Part of the Perl script the homepage is linked to:

        #!/usr/bin/perl
        use CGI;
        use CGI::Carp qw(fatalsToBrowser);
        use lib '../lib';
        use own_perl_module;

        my @error;
        my $data;

        $query = new CGI;
        $username = $query->param('username') || '';
        $oldpasswd = $query->param('oldpasswd') || '';
        $passwd = $query->param('passwd') || '';
        $passwd2 = $query->param('passwd2') || '';

        own_perl_module::connect();

        if ($query->param('submit')) {
            my $benutzer = own_perl_module::select_benutzer(username => $username)
                or push @error, "user not exists";
            push @error, "your password?!?" unless $passwd;
            unless (@error) {
                own_perl_module::update_benutzer($benutzer->{id}, {
                    oldpasswd => $oldpasswd,
                    passwd => $passwd,
                    passwd2 => $passwd2
                }, error => \@error) and push @error, "Password changed.";
            }
        }

    Here's part of the sub update_benutzer in the own_perl_module:

        if ($dat->{passwd} ne '') {
            my $username = $dat->{username} || $select->{username};
            my $system = "./chpasswd.pl '$username' '$dat->{passwd}'"
                . (defined($dat->{oldpasswd}) ? " '$dat->{oldpasswd}'" : undef);
            my $answer = `$system`;
            if ($? != 0) {
                chomp($answer);
                push @$error, $answer || "error changing password ($?)";

    Here's chpasswd.pl:

        #!/usr/bin/perl
        use FileHandle;
        use IPC::Open3;

        local $username = shift;
        local $passwd = shift;
        local $oldpasswd = shift;

        local $chat = {
            'Old Password: $' => sub { print POUT "$oldpasswd\n"; },
            'New password: $' => sub { print POUT "$passwd\n"; },
            'Re-enter new password: $' => sub { print POUT "$passwd\n"; },
            '(.*)\n$' => sub { print "$1\n"; exit 1; }
        };

        local $/ = \1;

        my $command;
        if (defined($oldpasswd)) {
            $command = "sudo -u '$username' /usr/bin/passwd";
        } else {
            $command = "sudo /usr/bin/passwd '$username'";
        }

        $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die;

        my $buffer;
        LOOP: while($_ = <PERR>) {
            $buffer .= $_;
            foreach (keys(%$chat)) {
                if ($buffer =~ /$_/i) {
                    $buffer = undef;
                    &{$chat->{$_}};
                }
            }
        }
        exit;

    Could this somehow be adjusted to verify users, without changing their passwords?
    The second possibility I see: all pppoe connections are logged in the MySQL database. If I could somehow retrieve the username (or uid) of the user connected via pppoe, this could be used to authenticate users. Users could then only check their Internet connections and costs while they are online (and thus paying money), but this could be tolerated. Here's the line of the script that inserts connections into the database:

        my $username = $ENV{PEERNAME};

    I thought it would be easy to use this variable, but $username seems to be always empty in test scripts (print $username). Any idea how to retrieve the user connected to the pppoe server? Sorry for the long question! Any help would be very much appreciated. :)
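
    For the first possibility, a password can be verified without being changed by driving PAM instead of /usr/bin/passwd. A minimal sketch using the Authen::PAM CPAN module, in the same spirit as the existing chpasswd.pl helper (assumes Authen::PAM is installed and that the invoking user has enough privilege for the PAM stack in use, which on most systems means root or a setuid wrapper):

        #!/usr/bin/perl
        # checkpasswd.pl <username> <password> -- exits 0 if the password
        # is valid. Sketch only.
        use strict;
        use warnings;
        use Authen::PAM;

        my ($username, $password) = @ARGV;

        # Conversation function: answer PAM's prompts with the credentials.
        my $conv = sub {
            my @res;
            while (@_) {
                my ($code, $msg) = (shift, shift);
                my $ans = '';
                $ans = $username if $code == PAM_PROMPT_ECHO_ON();
                $ans = $password if $code == PAM_PROMPT_ECHO_OFF();
                push @res, (PAM_SUCCESS(), $ans);
            }
            push @res, PAM_SUCCESS();
            return @res;
        };

        my $pamh = Authen::PAM->new('login', $username, $conv);
        ref($pamh) or die "PAM initialisation failed: $pamh\n";
        exit($pamh->pam_authenticate == PAM_SUCCESS() ? 0 : 1);

    The web form handler would then call this helper just as update_benutzer calls chpasswd.pl, and grant a session only on exit status 0.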

