Search Results

Search found 6695 results on 268 pages for 'news analysis'.


  • What is the standard term for my role?

    - by sigil
    I'm doing work that involves writing code and managing developers in a "special projects" division of a large company. I'd like to define my role better and figure out if there's an industry-standard term for what I do, so that it will be easier for me to research best practices and work on a career path. What I do all day:
    - A macro that connects an Excel sheet to an Access database is acting funny; I get called in to figure out what's happening and debug it.
    - Someone needs data extracted from a bunch of files on SharePoint. I figure out a client-side solution because I'm not authorized to do anything server-side, and getting IT to do anything would take several months and need a business case.
    - A manager wants a new data entry tool for their team. I interview the manager and team members to work out the functional requirements, then design/develop/test the application.
    - Someone needs a VBA script to crunch some data for their presentation that's due in two hours. I drop everything I'm doing to hack out a quick script and run the analysis, without much in the way of testing.
    - A developer has been hired to build a database for one of the teams, since I'm working on too many different things and don't have time to take this project on in the timeframe required. I direct his work and push him to meet certain deadlines, interview stakeholders to get more info that will help him figure out how to build the necessary forms, and modify the functional requirements of the database to fit the timeframe.
    - Someone wants to load a set of data into a GIS system and set up an ongoing refresh and reporting of this data set. I facilitate the conversation between the GIS developers and the owners of this data set, and design a demo application as proof of concept.
    It's kind of an "all-purpose programming and IT management" position, but it's not officially IT, because the company has an actual IT department with a rigorously defined system of submitting requests, developing code, and managing projects. What I do, I guess, is more of a handyman job, where stuff falls to me because I'm the geekiest one in the room. Is there a standard term in the software world for what I do?

    Read the article

  • SCOM, Server 2008 and SQL Server 2008

    - by Jacques
    Hi there, I'm trying to set up SCOM (System Center Operations Manager 2007 – Platform Monitoring) on my Server 2008 machine, using SQL Server 2008 running on the same machine. When I check my prerequisites I get problems on the SQL and Active Directory components. (I'm running SQL Server 2008, and Server 2008 without Active Directory installed.) Errors:

    1. Microsoft SQL Server 2005 Service Pack 1 required. Details: SQL Server 2005 SP1 is the next version of SQL Server. SQL Server 2005 Enterprise Edition is a complete data and analysis platform for large mission-critical business applications. The link provided in the resolution column is a trial version of the product and is not supported by the Microsoft SQL Server team.
    2. In order to install, Active Directory needs to be present. Details: Setup failed to verify the presence of Active Directory for this server.

    I've got a couple of questions I need answered; hope someone can help:
    - Do I need to install Active Directory for SCOM to work?
    - Can I run SCOM with a SQL 2008 database?
    - How do I get past these problems?

    Read the article

  • Mod Rewrite Help - Pseudo-Subdirectories

    - by Gimpyfuzznut
    I am dealing with a frustrating problem with Joomla that is going to require some URL trickery. The idea is straightforward, but after reading a bunch of guides for mod_rewrite I still can't seem to get it to work. Let's say my site is www.mysite.com. Joomla is already performing some rewriting for SEF URLs, so I have links like www.mysite.com/home and www.mysite.com/news and so on. I want to be able to have (4) pseudo-subdirectories like www.mysite.com/mode1/ and www.mysite.com/mode2/ and so on. These subdirectories should work as if the subdirectory isn't there, i.e. both www.mysite.com/mode1/home and www.mysite.com/mode2/home should pull up the same www.mysite.com/home. It should point any www.mysite.com/mode1/anypagehere to www.mysite.com/anypagehere. The reason I am asking for this is that I will be reading the URL for mode1, mode2, etc. to modify the template page. There will be a landing page that directs people to /mode1/ and /mode2/ etc., and the template will change based on that. Note that I don't want to actually pass a parameter in the URL accessible by a GET or whatever, because Joomla removes it (perhaps because of my current mod_rewrite settings). I've pasted the current .htaccess file:

        RewriteBase /joomla

        ########## Rewrite rules to block out some common exploits
        RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
        # Block out any script trying to base64_encode crap to send via URL
        RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
        # Block out any script that includes a <script> tag in URL
        RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Send all blocked requests to homepage with 403 Forbidden error!
        RewriteRule ^(.*)$ index.php [F,L]

        ########## Begin - Joomla! core SEF Section
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/index.php
        RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC]
        RewriteRule (.*) index.php
        #RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        ########## End - Joomla! core SEF Section
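    One possible approach (my sketch, untested, using the placeholder names from the question): add a rule between the exploit-block rules and the Joomla SEF section that strips the pseudo-directory and restarts matching against the bare path. The original /mode1/... URL survives the internal rewrite, so the template can still read the mode from $_SERVER['REQUEST_URI'] in PHP:

        # Strip a leading mode1..mode4 pseudo-directory and restart rule
        # processing against the remaining path; place this after the
        # exploit-block rules and before the Joomla SEF section
        RewriteRule ^mode[1-4]/(.*)$ $1 [L]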

    Read the article

  • Office365 SPF record has too many lookups

    - by Sammitch
    For some utterly ridiculous administrative reasons we've got a split domain with one mailbox on Office 365, which requires us to add include:outlook.com to our SPF record. The problem with this is that that rule alone requires nine DNS lookups of the maximum of 10. Seriously, it's horrible. Just look at it:

        v=spf1 include:spf-a.outlook.com include:spf-b.outlook.com ip4:157.55.9.128/25 include:spfa.bigfish.com include:spfb.bigfish.com include:spfc.bigfish.com include:spf-a.hotmail.com include:_spf-ssg-b.microsoft.com include:_spf-ssg-c.microsoft.com ~all

    Given that we have our own large-ish mail system, we need rules for a, mx, include:_spf1.mydomain.com, and include:_spf2.mydomain.com, which puts us at 13 DNS lookups. That causes PERMERRORs with strict SPF validators, and completely unreliable/unpredictable validation with non-strict or badly implemented validators. Is it possible to somehow eliminate 3 of those include: rules from the bloated outlook.com record, but still cover the servers used by O365?
    Edit: Commenters have mentioned that we should simply use the shorter spf.protection.outlook.com record. While that is news to me, and it is shorter, it's only one record shorter:

        spf.protection.outlook.com include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com include:spf.messaging.microsoft.com include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com

    Edit²: I suppose we can technically pare this down to:

        v=spf1 a mx include:_spf1.mydomain.com include:_spf2.mydomain.com include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com ~all

    but the potential issues I see with this are:
    - We need to keep abreast of any changes to the parent spf.protection.outlook.com and spf.messaging.microsoft.com records. If anything is changed or [god forbid] added, we would have to manually update ours to reflect that.
    - With our actual domain name the record's length is 260 chars, which would require 2 strings for the TXT record, and I honestly don't trust that all of the DNS clients and SPF resolvers out there will properly accept a TXT record longer than 255 bytes.
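    For auditing the lookup count, one option is to walk the include: chain by hand with dig (standard BIND tooling; the record names are the ones quoted above). Each TXT query below corresponds to one SPF lookup against the limit:

        dig +short TXT spf.protection.outlook.com
        dig +short TXT spfa.frontbridge.com    # repeat for each include: the previous query returns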

    Read the article

  • Word 2007 crashes on Server 2008 R2 terminal services

    - by John Rennie
    We are finding that Word 2007 (with SP2) crashes when used on a Windows 2008 R2 terminal server. Typically it crashes when you click File/Open or File/Save, but not every time; maybe one time in four. And just to be really confusing, on a test server in my office I can't make it crash. I have just today set up a brand new shiny 2k8 R2 terminal server with as simple a setup as possible, e.g. no anti-virus to confuse things, and we're still seeing crashes. My question is: has anyone else seen this, and if so, any clues on what's happening? We have a support case open with Microsoft, and the MS support engineer has conceded it's happening, but has so far been unable to find the reason. One possible factor is that all the 2k8 R2 terminal servers I've seen this on have been Hyper-V VMs (running on a 2k8 R2 host). I'm about to put in a physical 2k8 R2 terminal server at the customer where we're seeing the most crashes, in case this is relevant. More news soon. Sorry if this posting seems a bit vague, but this has just bitten us and is causing a lot of pain and sleepless nights :-( If anyone can help I'll be enormously grateful!
    Update: we've given up and gone back to 2008 pre-R2. Both Office 2003 and 2007 work fine now. I think there are some problems with TS in R2. Googling doesn't find much, so I thought it was just me. It's reassuring to find that someone else has seen the same problem.

    Read the article

  • IIS 7 returns 304 instead of 200

    - by Ola Herrdahl
    I have a strange issue with IIS 7. Sometimes it seems to return a 304 instead of a 200. Here is a sample request captured with Fiddler (note that the file requested is not in my browser's cache yet):

        GET https://[mysite]/Content/js/jquery.form.js HTTP/1.1
        Accept: */*
        Referer: https://[mysite]/Welcome/News
        Accept-Language: sv-SE
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E)
        Accept-Encoding: gzip, deflate
        Host: [mysite]
        Connection: Keep-Alive
        Cache-Control: no-cache
        Cookie: ...

    Note that there is no If-Modified-Since or If-None-Match in the request. But still the response is:

        HTTP/1.1 304 Not Modified
        Cache-Control: public
        Expires: Tue, 02 Mar 2010 06:26:08 GMT
        Last-Modified: Mon, 22 Feb 2010 21:58:44 GMT
        ETag: "1CAB40A337D4200"
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 01 Mar 2010 17:06:34 GMT

    Does anyone have a clue what could be wrong here? I'm running IIS 7 on Windows Web Server 2008 R2.
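    One avenue worth ruling out (an assumption on my part, not something established above): IIS's output cache, including the http.sys kernel-mode cache, can answer requests before the static-file handler ever sees them. Temporarily disabling it for the site shows whether the cache is producing the spurious 304s. A minimal web.config sketch:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- Turn off user-mode and kernel-mode output caching while diagnosing -->
            <caching enabled="false" enableKernelCache="false" />
          </system.webServer>
        </configuration>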

    Read the article

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows. In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now, %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in, and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that this has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
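    A related detail from the Apache docs that may explain the behaviour: %v always logs the canonical ServerName, while %V logs the server name according to the UseCanonicalName setting. So an alternative to the %{Host}i fix is:

        # With UseCanonicalName Off, %V reflects the client-supplied
        # Host header rather than the vhost's canonical ServerName
        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog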

    Read the article

  • PowerPoint 2010 crashes on quick-style menu

    - by Marcus Lindblom
    Windows 7 64-bit, recent install, with Office 2010 added. When I create some boxes and open the quick-style menu, it crashes (or stops responding, in windowese, and then it sends an error report). I've run the "Repair" from the installer, but it didn't help. There's nothing in Windows Update. I've Googled and searched on Microsoft's site, but found nothing. Any other ideas? (It's not related to this ppt crash question, as that concerns PPT 2007.) Error in event log:

        Faulting application name: POWERPNT.EXE, version: 14.0.4754.1000, time stamp: 0x4b967cf2
        Faulting module name: KERNELBASE.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdfe0
        Exception code: 0xe0000003
        Fault offset: 0x000000000000aa7d
        Faulting process id: 0xee8
        Faulting application start time: 0x01cb9dc710fd76d8
        Faulting application path: C:\Program Files\Microsoft Office\Office14\POWERPNT.EXE
        Faulting module path: C:\Windows\system32\KERNELBASE.dll
        Report Id: 58766562-09ba-11e0-90d1-00215a139192

    And:

        Fault bucket , type 0
        Event Name: APPCRASH
        Response: Not available
        Cab Id: 0

        Problem signature:
        P1: POWERPNT.EXE
        P2: 14.0.4754.1000
        P3: 4b967cf2
        P4: KERNELBASE.dll
        P5: 6.1.7600.16385
        P6: 4a5bdfe0
        P7: e0000003
        P8: 000000000000aa7d
        P9:
        P10:

        Attached files:
        C:\Users\marcusl\AppData\Local\Temp\CVR86DD.tmp.cvr
        C:\Users\marcusl\AppData\Local\Temp\WERC765.tmp.WERInternalMetadata.xml

        These files may be available here:
        C:\Users\marcusl\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_POWERPNT.EXE_d45f313d77f7e52cc8682b2b64cc3898127c2c_1106e3ac

        Analysis symbol:
        Rechecking for solution: 0
        Report Id: 58766562-09ba-11e0-90d1-00215a139192
        Report Status: 1

    Read the article

  • SQL Server 2008: unique problem bringing a DB online...

    - by Nai
    This is the error I am facing:

        TITLE: Microsoft.SqlServer.Smo
        Set offline failed for Database 'Go3D_Retailer'
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
        Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)

    Background to this error: I've been trying to move my destination log-shipping database to another physical server for analysis purposes. Because I do not have Active Directory set up, I had to hack my process by using the same username/password for both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server and I have no idea why it's trying to open a file from the E drive. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas? I'm running SQL Server 2008 Enterprise edition on Windows Server 2008 R2 64-bit.
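    A guess at a workaround rather than anything established above: the .ndf looks like a full-text catalog file whose metadata still carries the old server's E: path, and repointing it before bringing the database online is sometimes enough. A hedged T-SQL sketch (the logical file name and target path are assumptions to verify against sys.master_files first):

        -- Find the logical name and recorded path of the stray file
        SELECT name, physical_name
        FROM sys.master_files
        WHERE database_id = DB_ID('Go3D_Retailer');

        -- Repoint it at a directory that exists on this server
        ALTER DATABASE Go3D_Retailer
        MODIFY FILE (NAME = 'ftrow_Go3D_catalog',
                     FILENAME = 'D:\SQLData\ftrow_Go3D_catalog.ndf');

        ALTER DATABASE Go3D_Retailer SET ONLINE;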

    Read the article

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not the array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB.

        => ctrl all show config detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: 5001438006FD9A50
           Cache Serial Number: PAAVP9VYFB8Y
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Chassis Slot:
           Hardware Revision: Rev C
           Firmware Version: 3.66
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 3 secs
           Surface Scan Mode: Idle
           Queue Depth: Automatic
           Monitor and Performance Delay: 60 min
           Elevator Sort: Enabled
           Degraded Performance Optimization: Disabled
           Inconsistency Repair Policy: Disabled
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 15 secs
           Cache Board Present: True
           Cache Status: OK
           Accelerator Ratio: 25% Read / 75% Write
           Drive Write Cache: Enabled
           Total Cache Size: 512 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True

           Array: A
              Interface Type: SAS
              Unused Space: 412476 MB
              Status: OK

              Logical Drive: 1
                 Size: 72.0 GB
                 Fault Tolerance: RAID 1+0
                 Heads: 255
                 Sectors Per Track: 32
                 Cylinders: 18504
                 Strip Size: 256 KB
                 Status: OK
                 Array Accelerator: Enabled
                 Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3
                 Disk Name: /dev/cciss/c0d0
                 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB
                 OS Status: LOCKED
                 Logical Drive Label: AE438D6A5001438006FD9A50BE0A
                 Mirror Group 0:
                    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
                    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
                 Mirror Group 1:
                    physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
                    physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)

           SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
              Device Number: 250
              Firmware Version: RevC
              WWID: 5001438006FD9A5F
              Vendor ID: PMCSIERA
              Model: SRC 8x6G

    Read the article

  • Is it possible to extract contents of a ThinApp container?

    - by rsk82
    Is it possible to extract packages like this made by somebody else? OK, so if there is no general way of extracting such archives (which can be a huge exe file, or a tiny exe starter with a huge packed *.bin file holding the main files of the app that is to be run in a portable way), is there a way to set an option in the compilation *.ini file, or some other way, to make such a package extractable? I remember reading somewhere that somebody created a tiny program (it was mentioned on the VMware forums as far as I can remember; it was a crude thing coded for private use and I never managed to download it) to sit alongside the main portable application. If the application has an open/save file dialog, you can navigate from within the main app to that program, which virtually sits in Program Files alongside it; the program would then scan all files it can see, somehow distinguish real files from virtual ones, and save all the virtual files in a structure similar to the initial compilation folder from which the portable app was created. I know that it is a very roundabout way of doing things, but maybe the only feasible one. Nevertheless, any news on this front? Can antiviruses somehow unpack these things? Maybe they must buy a code or license for it from VMware? Edit: I found http://communities.vmware.com/thread/257433?tstart=600 and am still trying to make sense of it. Wrong; this was about moving an old version of ThinApp to Win7.

    Read the article

  • MacBook Pro 2010 Ethernet jumbo frame (9k MTU) support?

    - by Troggy
    Has anyone been able to use jumbo frame support on their 2010 MacBook Pros? This is kind of negative news here, but I am seeing many reports that this is not available anymore due to Apple's choice of network card in the new machines. I cannot set my MTU over 1500 on my new 2010 MBP i7, but my old early-2008 MBP (Core 2) has the 9000 MTU setting available. Everything I have is set up to use jumbo frames, and I thought Apple kept that feature in their "pro" lineup. It sounds like the Mac Pro still has it. Did they decide to use a chipset that doesn't support it? I am trying to pinpoint some solid chipset numbers and the feature support. Maybe they just need to update the drivers? Is there some more official information about this feature? This might seem minor, but it is really frustrating if Apple removed this feature from their pro laptop line. From what I have read so far, it sounds like I am not alone in my frustrations:
    http://discussions.info.apple.com/message.jspa?messageID=12258067
    http://discussions.apple.com/thread.jspa?messageID=12130158
    Anyone have any experience or further knowledge about this issue... beyond typing my question into Google and giving the top 5 results as answers? :)
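    For checking from the command line (standard OS X tooling, though which interface is the wired port is machine-specific; en0 below is an assumption, and the hardware port name such as "Ethernet" should also be accepted):

        networksetup -listvalidMTUrange en0    # MTU values the driver will accept
        sudo networksetup -setMTU en0 9000     # fails if the NIC/driver lacks jumbo support
        sudo ifconfig en0 mtu 9000             # the ifconfig equivalent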

    Read the article

  • Embed album art in Ogg through the command line in Linux

    - by teratomata
    I want to convert my music from FLAC to Ogg, and currently oggenc does that perfectly except for album art. metaflac can output album art; however, there seems to be no command-line tool to embed album art into an Ogg file. MP3Tag and EasyTAG are able to do it, and there is a specification for it here, which calls for the image to be base64-encoded. So far, though, I have been unsuccessful in taking an image file, converting it to base64 and embedding it into an Ogg file. If I take a base64-encoded image from an Ogg file that already has the image embedded, I can easily embed it into another file using vorbiscomment:

        vorbiscomment -l withimage.ogg > textfile
        vorbiscomment -c textfile noimage.ogg

    My problem is taking something like a JPEG and converting it to base64. Currently I have:

        base64 --wrap=0 ./image.jpg

    which gives me the image file converted to base64. Using vorbiscomment and following the tagging rules, I can embed that into an Ogg file like so:

        echo "METADATA_BLOCK_PICTURE=$(base64 --wrap=0 ./image.jpg)" > textfile
        vorbiscomment -c textfile noimage.ogg

    However, this gives me an Ogg whose image does not work properly. Comparing the base64 strings, I noticed that all properly embedded pictures have a header, but the base64 strings I generate lack it. Further analysis of the header:

        od -c header.txt
        0000000  \0  \0  \0 003  \0  \0  \0  \n   i   m   a   g   e   /   j   p
        0000020   e   g  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
        0000040  \0  \0  \0  \0  \0  \0  \0  \0 035 332
        0000052

    This follows the spec given above: notice that 003 corresponds to "front cover" and image/jpeg is the MIME type. So finally, my question is: how can I base64-encode a file and generate this header along with it for embedding into an Ogg file?
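    Since the missing piece is just that binary header, here is a minimal sketch (my own code, not from the post) that builds the structure decoded above per the METADATA_BLOCK_PICTURE spec: big-endian fields for picture type, MIME string, description, dimensions, then the raw image bytes, all base64-encoded for vorbiscomment:

        #!/usr/bin/env python
        import base64
        import struct
        import sys

        def picture_block(image_path, mime="image/jpeg", picture_type=3):
            """picture_type 3 = front cover, matching the od dump above."""
            with open(image_path, "rb") as f:
                data = f.read()
            desc = b""
            block = struct.pack(">I", picture_type)
            block += struct.pack(">I", len(mime)) + mime.encode("ascii")
            block += struct.pack(">I", len(desc)) + desc
            # Width, height, colour depth and palette size may be left as zero.
            block += struct.pack(">4I", 0, 0, 0, 0)
            block += struct.pack(">I", len(data)) + data
            return base64.b64encode(block).decode("ascii")

        if __name__ == "__main__":
            print("METADATA_BLOCK_PICTURE=" + picture_block(sys.argv[1]))

    Usage, matching the vorbiscomment workflow above (the script name is hypothetical): python makeheader.py cover.jpg > textfile, then vorbiscomment -c textfile noimage.ogg.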

    Read the article

  • How to create custom content for an nginx 502 error page while keeping the original URL in the browser

    - by user123862
    I'm trying to get a custom-language error page from nginx while keeping the original URL in the browser, without success so far. E.g. I go to the URL xaluan.com/aaa/bbb.html at a time the backend server is down; nginx should show the 502 error, with the same URL in the browser but my custom message in my language.
    Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but the site shows the default nginx error page at domain.com/502.html (the content of the web page is not the same as I created):

        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
        }

    Test 2: Then I created the same page in my www domain folder, /home/xaluano/public_html/502.html, but this keeps redirecting me to the root domain.com/502.html. The content is now the same as I created, but the URL is still not what I need:

        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;
        }

    EDIT/UPDATE 10/06/2012 for more detail: please see my nginx config at http://pastebin.com/7iLD6WQq and the vhost config at http://pastebin.com/ZZ91KiY6.
    The test case: if the Apache httpd service is stopped (service httpd stop), then open a browser and go to xaluan.com/modules.php?name=News&file=article&sid=123456. I will see the 502 error with the same URL in the browser address bar.
    Custom error page: I need a config which, when Apache fails, shows a custom message telling the user to wait one minute for the service to come back, then refreshes the current page with the same URL (the refresh I can do easily with JavaScript). As long as nginx doesn't change the URL, the JavaScript can work it out. Any help will be great... thanks in advance.
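    For reference, a minimal sketch of the pattern that normally achieves this (the paths come from the post; the backend address and everything else are my assumptions). error_page performs an internal redirect, so the browser keeps the original URL as long as the error location is marked internal and nothing issues an external redirect:

        server {
            listen 80;
            server_name xaluan.com;

            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_intercept_errors on;   # also intercept errors the backend generates
            }

            error_page 502 503 504 /502.html;
            location = /502.html {
                root /usr/local/nginx/html;  # holds the custom-language page
                internal;                    # never served on a direct request
            }
        }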

    Read the article

  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and the "block all third-party cookies without exception" settings enabled in Chrome 12 (I'm not sure what the exact names of these settings are in English, as I run Chrome with Swedish localization). I do, however, have two problems. My first problem is that when I'm visiting one of my local newspaper's sites (and surely others), cookies from www.facebook.com are allowed for some reason. I suspect the reason is that I have added an exception for the www.facebook.com domain, but as the setting "block all third-party cookies without exception" implies, that shouldn't matter. My second problem is that if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my whitelist, primarily from ad services. My expectation from enabling the above-mentioned settings was that only cookies fulfilling the two following requirements would be accepted:
    - the cookie must be from the domain in my address bar
    - the cookie must be from a domain on my whitelist
    Apparently this isn't the case. The question is: have I completely misunderstood the settings, or is this a bug? And, either way, is there a way to accomplish my desired behavior?

    Read the article

  • SVN and WebSVN with different users access restriction on multiple repositories on linux

    - by user55658
    First of all, sorry for my English. I've installed an Ubuntu Server 10.04.1 with Apache2, Subversion, mod_dav_svn and WebSVN (and other services of course, like PHP 5, MySQL 5.1, etc.). I've configured SVN with multiple repositories, each one with different groups and users, like:

        /var/myrepos/repo1   group: mygroup1
        /var/myrepos/repo2   group: mygroup2
        /var/myrepos/repo3   user: johndoe

    With access through mod_dav_svn it works perfectly, i.e. http://myserver/svnrepo1 is accessible only for users in mygroup1, with their Linux users and SVN passwords. It also works for the other repos with their users and groups. But when I tried WebSVN, it shows all repos without checking whether a user in mygroup1 may view repo2 (that's what I don't want to happen). You can log in as any user in mygroup1, mygroup2, or as johndoe, and you get into all repositories. I'll try to find a solution and I'll post the news; if anyone can help me with this I'd appreciate it so much! Thanks for everything. I show my files: /etc/apache2/mods-available/dav_svn.conf:

        <Location /svnrepo1>
          DAV svn
          SVNPath /var/myrepos/repo1
          AuthType Basic
          AuthName "Repositorio Subversion de MD"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>

        <Location /websvn/>
          Options FollowSymLinks
          order allow,deny
          allow from all
          AuthType Basic
          AuthName "Subversion Repository"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>
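    One approach worth trying (my suggestion, not from the post): move the per-repository rules into a mod_authz_svn access file and point WebSVN at the same file, so both enforce identical permissions. A sketch using the group names above (the user lists are placeholders):

        # /etc/apache2/dav_svn.authz
        [groups]
        mygroup1 = alice, bob
        mygroup2 = carol

        [repo1:/]
        @mygroup1 = rw

        [repo2:/]
        @mygroup2 = rw

        [repo3:/]
        johndoe = rw

    In each Apache <Location> add AuthzSVNAccessFile /etc/apache2/dav_svn.authz, and in WebSVN's config.php enable the same file (WebSVN understands the standard Subversion authz format) with $config->useAuthenticationFile('/etc/apache2/dav_svn.authz');.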

    Read the article

  • Free or Open Solution for Storing and Charting CSV data

    - by rrrfusco
    I'm presently storing CSV files, combining them, opening them in OpenOffice, creating pivot tables and then generating charts from the spreadsheet. I've looked at OOBase, but appending CSV files to Base is clunky for some reason. SQLite seems like a good database solution, but I haven't found a good charting program that connects to it with ease. Although OpenOffice (or LibreOffice) maintains the references and allows you to update the information, this process is far from efficient. There are too many steps, and it seems one program should handle all of these tasks. A better program would be more intuitive, allow you to simply add inserts into a database, and include an interface for standard charting settings.
    EDIT: Simplest Automated Analysis and Chart Generation Tool? The above answer references Spotfire and Tableau, which have free 14- and 30-day trials respectively. Each program is nicely streamlined and designed. I'm looking for a program between this quality and LibreOffice. Can you recommend a better open or free desktop solution for Windows?
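    If a small script is acceptable, the store-then-chart pipeline described above can be collapsed into a few lines of Python with sqlite3 and matplotlib (a sketch assuming a simple two-column CSV; the file, table and column names are made up):

        import csv
        import sqlite3
        import matplotlib.pyplot as plt

        # Append a CSV into SQLite so later files are just more INSERTs
        conn = sqlite3.connect("measurements.db")
        conn.execute("CREATE TABLE IF NOT EXISTS data (label TEXT, value REAL)")
        with open("input.csv", newline="") as f:
            rows = [(r["label"], float(r["value"])) for r in csv.DictReader(f)]
        conn.executemany("INSERT INTO data VALUES (?, ?)", rows)
        conn.commit()

        # Chart an aggregate straight from the database
        labels, totals = zip(*conn.execute(
            "SELECT label, SUM(value) FROM data GROUP BY label"))
        plt.bar(labels, totals)
        plt.savefig("chart.png")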

    Read the article

  • Ubuntu 12.04 cloud edition on Amazon - Apache2 - /etc

    - by jdog
    I have set up a web server on Amazon with 3 virtual hosts. For some reason I can't get any of the sites going on it; they all show a 404 error. /var/log/apache2/error.log shows "File does not exist: /etc/apache2/htdocs". I have checked:
    - a2ensite for all my virtual hosts
    - the softlinks in sites-enabled
    - access rights in /var/www, set to 777 in case the user is not www-data
    - grep -r htdocs /etc/apache2 (returns nothing)
    - ports.conf has a NameVirtualHost directive exactly matching the virtual hosts
    What else could this be? ports.conf:

        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz

        NameVirtualHost 107.20.169.163:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    sites-available/www.seleconlight.com:

        <VirtualHost 107.20.169.163:80>
            ServerName www.seleconlight.com
            DocumentRoot /var/www/www.seleconlight.com
            CustomLog /var/log/apache2/www.seleconlight.com-access.log combined
            ErrorLog /var/log/apache2/www.seleconlight.com-error.log
        </VirtualHost>
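    One likely culprit (my inference; the post doesn't confirm it): on EC2 the public address 107.20.169.163 is NATed and never bound to the instance's own interface, so no incoming request matches a NameVirtualHost/VirtualHost keyed to that IP, and Apache falls through to its compiled-in default DocumentRoot (/etc/apache2/htdocs). Binding the vhosts to *:80 sidesteps this:

        # ports.conf
        NameVirtualHost *:80
        Listen 80

        # sites-available/www.seleconlight.com
        <VirtualHost *:80>
            ServerName www.seleconlight.com
            DocumentRoot /var/www/www.seleconlight.com
        </VirtualHost>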

    Read the article

  • How to test email spam scores with amavis?

    - by CaptSaltyJack
    I'd like a way to test a spam message to see the spam score SpamAssassin gives it. The SA db files (bayes_toks, etc.) reside in /var/lib/amavis/.spamassassin. I've been testing emails by doing this:

        sudo su amavis -c 'spamassassin -t msgfile'

    though this yields some strange results, such as:

        Content analysis details:   (3.7 points, 5.0 required)

         pts rule name              description
        ---- ---------------------- --------------------------------------------------
         3.5 BAYES_99               BODY: Bayes spam probability is 99 to 100%
                                    [score: 1.0000]
        -0.0 NO_RELAYS              Informational: message was not relayed via SMTP
         0.0 LONG_TERM_PRICE        BODY: LONG_TERM_PRICE
         0.2 BAYES_999              BODY: Bayes spam probability is 99.9 to 100%
                                    [score: 1.0000]
        -0.0 NO_RECEIVED            Informational: message has no Received headers

    0.2 is an awfully low score for BAYES_999! This is the first time I've used amavis; previously I've always just used SpamAssassin directly as a content filter in Postfix, but apparently running amavis/SpamAssassin is more efficient. So, with amavis in the picture, how can I run a test on a message to see its spam score breakdown? Another email I ran a test on got this result:

        2.0 BAYES_80               BODY: Bayes spam probability is 80 to 95%
                                   [score: 0.8487]

    It doesn't make sense that BAYES_80 can yield a higher score than BAYES_999. Help!
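    For a closer look, a debug run as the amavis user shows which Bayes database and configuration actually get loaded (standard spamassassin flags; msgfile is the test message from above):

        # -D limits debug output to the named channels, -t prints the full report
        sudo -u amavis spamassassin -t -D bayes,config msgfile 2>&1 | less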

    Read the article

  • Outlook Express 6 crashes after "synchronizing account"

    - by Ira Baxter
    [EDIT: Revised OE version to match reality] I use Outlook Express (OE) 6.00.3790.3959 to read newsgroups. I regularly scan through about 100 newsgroups. When I select a newsgroup and then select "Synchronize Account", OE goes through the newsgroups, visibly updating the unread message count. It gets apparently to the end, and then comes back to check for Watched Conversations, of which I typically have a few scattered across the newsgroups. It invariably crashes with some kind of access fault. A dialog box pops up saying "Want to debug with "; I invariably say No, and the process goes away. I restart OE, select the newsgroup, and skip "Synchronize Account"; I am able to read news and everything seems just fine. This behavior has occurred since I set it up several months ago. Is the newsgroups database screwed up? Can I run something like CHKDSK or Outlook's PST repair to fix things up? Any suggestions as to what to do? The system is a Windows XP 64 system installed in January 2008. I accept Windows Updates and install them on a regular basis. I use Outlook (not OE) for regular email, and it behaves perfectly normally.

    Read the article

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is a very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is produced by the kernel to report that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former to be the case: if disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption, then any of these errors is really bad news. Please advise what to do here.

    Read the article

  • How do I get a subdomain on Xampp Apache @ localhost?

    - by jasondavis
    UPDATE: I got it working now; I just had to change to [...]. The port number is important here. I modified my Windows hosts file at C:\Windows\System32\drivers\etc and added this to the end of it:

        127.0.0.1 images.localhost
        127.0.0.1 www.friendproject.com
        127.0.0.1 friendproject.com

    Then I modified my httpd-vhosts.conf file on Apache under XAMPP at C:\webserver\apache\conf\extra. Under the part that shows examples for adding a virtualhost, I added the code below:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /htdocs/images/
            ServerName images.localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /htdocs/
            ServerName friendproject.com
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /htdocs/
            ServerName www.friendproject.com
        </VirtualHost>

    Now the problem is that when I go to any of the newly added domains in the browser, I get the error below; even worse, I now get this error even when going to http://localhost/, which worked fine before doing this. I realize I can change everything back, but I really need to at least get http://images.localhost to work. What do I do?

        Access forbidden!
        You don't have permission to access the requested directory. There is
        either no index document or the directory is read-protected.
        If you think this is a server error, please contact the webmaster.
        Error 403
        localhost
        07/25/09 21:20:14
        Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9

    Read the article

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present. Analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than deep-scan, one which would recover filenames and possibly paths.
    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB, though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run CHKDSK on my backup disk, which made the folders accessible but also left them empty. I connected this disk via SATA to my main PC (Arch Linux). I tried:
    - testdisk
    - ntfsundelete
    - ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK" though)
    to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than removed by a typical journal'd deletion. If it is useful at all, a majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files. If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types (a sketch of that idea follows below). I am unfamiliar with the exact workings of NTFS but used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it! My priorities, in ascending order of importance for choosing the accepted answer:
    1. Restores directory structure
    2. Recovers many filenames in addition to the file data
    3. Is free / very cheap
    4. Runs on Linux
    5. Recovers a majority of file data
    The last point is the most important, but the more of the higher points you match the more rep you'll probably get :)
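    As a starting point for the home-grown sector scanner mentioned above, a minimal JPEG-carving sketch in Python (my illustration; the device path is hypothetical, and dedicated tools such as photorec do this far more robustly):

        # Scan a raw device for JPEG start/end markers and dump candidates.
        SOI = b"\xff\xd8\xff"   # JPEG start-of-image plus the next marker's 0xff
        EOI = b"\xff\xd9"       # JPEG end-of-image
        CHUNK = 64 * 1024 * 1024

        def carve(device="/dev/sdb", prefix="carved"):
            count = 0
            with open(device, "rb") as disk:
                while True:
                    data = disk.read(CHUNK)
                    if not data:
                        break
                    start = data.find(SOI)
                    while start != -1:
                        end = data.find(EOI, start)
                        if end == -1:
                            break   # image crosses a chunk boundary: skipped in this sketch
                        with open(f"{prefix}_{count:06d}.jpg", "wb") as out:
                            out.write(data[start:end + 2])
                        count += 1
                        start = data.find(SOI, end)
            return count

        print(carve(), "candidate JPEGs written")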

    Read the article

  • How do I export the address book from an N97?

    - by mplungjan
    Hi, I need to copy my numbers from my private Nokia to my office BlackBerry. I have not found a way to export my phone numbers from Ovi or elsewhere. On Mac, iSync stopped working with Snow Leopard, and Ovi on Windows does not export. I do not mind a Windows-based suggestion. I lost a description of how to use the Ovi backup files in another program. What I have done so far:
    - Terminal: sudo open -a iSync.app; it launched, but iSync said "this device is not supported by iSync".
    - Went to http://europe.nokia.com/support/product-support/isync/compatibility-and-download and found a plugin (I am sure that was not there a while ago :| )
    - Checked the software version: 22.0.110. Installed the plugin.
    - Ran iSync, which found and installed my N97 device successfully. Synced. It stopped with "The connection was lost while talking to the phone." See http://discussions.europe.nokia.com/t5/Nseries-and-S60-Smartphones/N97-iSync-Multimedia-Transfer-Modem/m-p/568560 (no news since Jan 2010).
    - Tried to download and install http://best-vcard.en.softonic.com/symbian but the installer fails :(
    I simply do not understand why Nokia is giving us such a hard time. I would not have considered switching from Nokia if Mac had been better supported. It is so frustrating that they just seem not to care about losing Nokia fanbois like me, especially since I am this outspoken on the net and what I say on popular forums gets indexed by Google fast. I am very close to just going iPhone here. Hope someone has Nokia's ear.
    UPDATE: I downloaded NbuExplorer from SourceForge. It will extract everything from an Ovi backup into VCF, VCS and VMG files. Very useful software, and free.

    Read the article

  • Why can't we treat SSL certs like PGP keys instead of trusting CAs?

    - by yarun can
    I am dumb and stupid and I do not know all the technical aspects of SSL and the server/client-side implications and implementations. However, I understand them well enough from a user's point of view to use SSL and encryption daily. I was thinking about how silly it is to trust some unknown/known CAs when it comes to the certificates for our servers. There have been many cases of misconduct, misuse, compromise and theft of certificates/CA keys at those places. On top of those known issues, we also have to pay these guys regularly. I am wondering why we can't use/treat web server certificates like we use our PGP keys. So I sign an SSL certificate and send it to a central server, and then each user accessing my site checks its validity and the keys against some central server (like PGP key servers). Is this a stupid idea? If so, what could be a better idea than the current system of issuing valid certificates? I am looking for a better, more secure idea. Naturally this is not a solution to an existing problem; rather, it would be a hypothetical solution for some future implementation of the currently messed-up web of trust on the internet, given recent news about the NSA and their criminal buddies around the world. Thanks.

    Read the article
