Search Results

Search found 4140 results on 166 pages for 'alias analysis'.


  • Recommend a free temperature-monitoring utility for cores + video card, on Vista?

    - by smci
    Looking for your recommendations for a free temperature-monitoring utility for my PC (Core 2) and graphics card, on Vista. (Question reposted with the hyperlinks now that I have 10 reputation.) I don't want all the geeky details; I don't overclock, and I don't see the need to mess with my fan speeds or motherboard settings. I just want something fairly basic to help with troubleshooting intermittent overheats on the video card and/or mobo:
    - must run on Windows Vista (yes, don't laugh)
    - ideally displays temperature when minimized to the toolbar, and/or automatically alerts me when the temperature on either core or the video card exceeds a threshold
    - ideally measures the temperature of the video card and system as well, not just the cores; HDD temperature is not necessary, I think
    - logging is nice, graphs are also nice
    - portability to Linux and Mac is nice
    Apparently Everest is the best paid option, but I'm not prepared to spend $40. I found the following free options, but no head-to-head at-a-glance comparison:
    - CoreTemp (only does cores, not the video card?)
    - Open Hardware Monitor (nice graphs, displays when minimized to the toolbar, no alerts)
    - RealTemp (has alerts, works minimized, lightweight install)
    - HWMonitor from CPUID (no alerts; CNET: "[free version is] simple but effective")
    - CPUCool (not free: 21-day trialware, then $18)
    - SpeedFan from Almico (too geeky, detail overload; CNET: "most users won't be able to make head or tail of the data this utility provides")
    - Motherboard Monitor (CNET: not recommended, requires expert knowledge of your mobo, dangerous)
    - Intel Thermal Analysis Tool (only does cores, not the video card? has logging)
    Useful discussions I found: hardwarecanucks.com, superuser.com 1, 2, forums.techarena.in
    (Update: I downloaded Real Temp 3.60 and it meets all my needs; the customizable alert temperature is great. Open Hardware Monitor seems to be the other one that mostly meets my needs, except for the lack of alerts; but it is portable. I tried SpeedFan, but the interface is very cluttered, with too much unnecessary detail (it needs a Basic/Advanced mode and a revamp of the interface). The answer to my underlying issue is the nVidia GeForce LE 7500 video card, which runs very hot.)

    Read the article

  • Control Panel as menu includes a blank item

    - by Matthew Ferreira
    When viewed as a menu attached to the Start Menu in Windows 7 Ultimate x64, the Control Panel contains a blank item. This item cannot be deleted or removed, and I also cannot create a shortcut to it; no error message is displayed, instead simply nothing happens. I've tried using Shell Object Editor (using Run as Administrator) to find out if there is an errant entry on the Control Panel, but many entries (almost two dozen) are blank. There are several valid entries as well. I've looked through the registry and through C:\Windows, \system32, and \SysWOW64, but have had no success. I looked at this question, but I am not using Windows XP and thus have no option to use Tweak UI's Rebuild Icons function. Please note that there is no empty entry in the Control Panel when it is opened normally, only when attached to the Start Menu as a menu. I have compared the list of entries on the attached menu to the normal Control Panel, and other than the blank entry they are exactly the same; nothing is missing from one or the other. I've also compared the menu and the normal view to reference images and lists of Control Panel items and have found no irregularities. Is anyone familiar with this problem, or does anyone know of a solution? I've performed virus and malware scans and found nothing. I've used CCleaner with no change. Nothing with Shell Object Editor. Nothing with Registry Editor. Certainly someone here knows how to fix this. My only guess is the many blank entries visible in Shell Object Editor, but I am reluctant to delete that many items without further analysis and guidance. I appreciate your time and consideration.

    Read the article

  • Defragging Host OS of VMWare

    - by JackLocke
    Hi all, I want to ask something that has been puzzling me for the last few days. I will try to explain my problem as clearly as I can. I have VMware Workstation installed on my machine, and I use a separate 100GB drive which stores all of my virtual machines, nothing else. Last week I was playing with a defragmentation tool called "Smart Defrag", which showed me in its analysis report that the drive where I am currently storing all of my virtual machines has more than 80% fragmentation! Now my question is: what will be the effect on my guest/VM performance if I defrag my host machine? I mean, this host machine is essentially storing those virtual machines, but it still doesn't have any direct access to whatever is stored inside them, so defragging the host should not cause any problem. But before proceeding, I want to hear from other people who may have hit the same problem. I will really appreciate any help. BTW, I am using Windows 7 as the host, and the guest machines I am using are Windows 2008, 2003, and Ubuntu 10.04. Thanks, Jack

    Read the article

  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity, and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution. A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two; that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and myself sadly, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important and what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I still need to bring my own highly trained monkey to the party, which I am several years from having.

    Read the article

  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview
    We saw in a previous post how index<N> represents a point in N-dimensional space, and in this post we'll see how to define the N-dimensional space itself. With C++ AMP, an N-dimensional space can be specified with the template class extent<N>, where you define the size of each dimension. From a look-and-feel perspective, you'd expect the programmatic interface of a point type and a size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least-significant, BUT each integer represents the size for that dimension (and hence cannot be negative). So, if you read the description of index, you won't be surprised by the description of extent<N> below.
    There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1-, 2-, and 3-dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, +=, -=. Finally, there are additional overloads for plus and minus (+, -) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements.
    Example code
        extent<2> e(3, 4);
        _ASSERT(e.rank == 2);
        _ASSERT(e.size() == 3 * 4);
        e += 3;
        e[1] += 6;
        e = e + index<2>(3, -4);
        _ASSERT(e == extent<2>(9, 9));
        _ASSERT( e.contains(index<2>(8, 8)));
        _ASSERT(!e.contains(index<2>(8, 9)));
    grid<N>
    Our upcoming pre-release bits also have a type similar to extent: grid<N>. The way you create a grid is by passing it an extent, e.g.
        extent<3> e(4, 2, 6);
        grid<3> g(e);
    I am not going to dive deeper into grid; suffice for now to think of grid<N> simply as an alias for the extent<N> object, one that you create when you encounter a function that expects a grid object instead of an extent object.
    Usage
    The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post. Comments about this post by Daniel Moth are welcome at the original blog.
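    To make the Usage paragraph concrete, here is a minimal sketch of shaping data with an extent, assuming the released C++ AMP API in amp.h (the pre-release bits described above may differ slightly, e.g. in whether functions take a grid or an extent):
        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        int main()
        {
            extent<2> e(3, 4);                 // a 3x4 two-dimensional space
            std::vector<int> v(e.size(), 1);   // backing store sized from the extent
            array_view<int, 2> av(e, v);       // wrapper shaped by the extent
            index<2> idx(2, 3);                // a point inside the extent
            if (e.contains(idx))
                av[idx] = 7;                   // index<N> addresses one element
            return 0;
        }
    The same extent<2> is also what you would hand to the array<int, 2> constructor when you want a container instead of a wrapper.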

    Read the article

  • Value of Itanium or Sparc over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, currently running on SUSE (and potentially migrating to RedHat). Our database is approximately 100GB and performs adequately on our current hardware (x86_64) with approximately 6GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are:
    - Are there substantial benefits to looking at Itanium or Sparc hardware, and are there any drawbacks? Is there a point where one starts to scale out better?
    - What are the long-term support options for Itanium? Given the dominance of x86, would it be safer long term to stick with x86?
    - On average, what would be the performance benefit of implementing an Oracle database on Itanium or Sparc over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first?
    If anyone can point me towards some solid documentation comparing the platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer. Edit: added Sparc as an option, as it was previously not considered, but with the recent Oracle-Sun acquisition it seems very relevant.

    Read the article

  • Spreadsheet application that can handle big data OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the databases in question is quite simple: usually just a few columns, which include a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average of the rows that have a timestamp within +/-1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files with 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: A simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
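    (For readers with the same bottleneck: the AVERAGEIFS() step described above moves easily out of a spreadsheet. Below is a minimal command-line sketch in Python; the file name data.csv, the lack of a header row, and the column positions (timestamp first, value to average last) are assumptions for illustration, not from the original post.)
        import csv

        # Load (timestamp, value) pairs; column positions are assumed.
        rows = []
        with open("data.csv", newline="") as f:
            for rec in csv.reader(f):
                rows.append((int(rec[0]), float(rec[-1])))

        rows.sort()  # the +/-1000-second window assumes time order

        # Sliding window with a running sum: both pointers only move forward,
        # so the whole pass is linear instead of Excel's per-row rescan.
        lo = hi = 0
        running = 0.0
        averages = []
        for ts, _ in rows:
            while hi < len(rows) and rows[hi][0] <= ts + 1000:
                running += rows[hi][1]
                hi += 1
            while rows[lo][0] < ts - 1000:
                running -= rows[lo][1]
                lo += 1
            averages.append(running / (hi - lo))
    On half a million rows this runs in seconds, which is the kind of gap the question is describing.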

    Read the article

  • Missing drive space in Server 2003

    - by Tim Brigham
    I have two drives used for SQL backups which for the last week have been acting strangely: the free space indicated by Windows is far off from what WinDirStat, etc., indicate. There should only be about 60 GB of drive space used, and there is about 160. This would match the utilization if the two last backup files were still residing on disk. SQL Server is 2000, the OS is Server 2003 x64, running on a VMware 5.0 cluster. OSSEC and McAfee show this system as clean. My current plan is to temporarily attach one of these drives to another VM for analysis. Is there anything more I should be looking at? There were a lot of pages on the net when I was looking for documentation on this issue, but I haven't found this case described. EDIT: Unfortunately, even a full reboot did not clear this behavior. I also used Process Explorer to look for open file handles. No dice.

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind, correct me if I'm wrong, a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage collection optimization. Also there would be more time before TRIM became necessary. I see that Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also, many SSDs contain internal caches of some sort, and presumably those caches are larger for correspondingly larger SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (webpages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant for what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.

    Read the article

  • torrent downloads not showing on Squid log

    - by noobroot
    Hello, I have been working as a sysadmin for just a few months, hence I still have lots to learn. The first thing I'd like to do is as follows: we have an OpenBSD 4.5 box acting as firewall, DNS, cache, etc. The box has two network cards, one connected directly to the internet and the other to our switch. I used to work with sarg for log analysis but then changed to the much faster free-sa. I use a daily free-sa report to check bandwidth usage and report our top 5 bandwidth consumers (be #1 three days a week and you will be buying the pizzas :D; we are a small company, ~20 people, so we are very familiar with each other). This was working really well until recently. One of us needed to download some stuff via torrent (~3GB), and since the pizza rule applies only to non-work-related downloads, he told me (verified) that his download was indeed work related, so I would dismiss that 3GB off his quota. But to my surprise the log didn't show that 3GB: his IP's consumption was only around 290MB. More recently, since the FIFA World Cup started, we know that some of the employees are watching the matches' streams. We know it and we don't care, since, as already stated, we are a small company and don't have restrictive policies; we can all chat, watch YouTube, and download anything we want, BUT we are only allowed 300MB a day, otherwise you'll end up on the top5-pizza-board. Anyway, that streaming consumption is also not showing up in the free-sa reports. So my question is: why is this traffic being excluded from the reports? I'm thinking that the free-sa reports list only certain types of things, but I'm also wondering whether it's the Squid logs that are not, erm, logging these connections. Any help, guide, advice or clarification is appreciated.

    Read the article

  • Licensing SQL Server 2012 Reporting Services w/ SharePoint 2010

    - by Evan M.
    Here's my situation: I have one VM that is running SharePoint 2010 SP1, and a different physical server running SQL Server 2008 R2 that hosts all the configuration and content databases for SharePoint. Now we want to start providing BI capabilities to our users with SharePoint and SQL Server. With its new features, 2012 is the obvious way to go. To support this, I'm looking to build a new VM that will have SQL Server 2012 installed with Analysis Services and SSIS, which will be the platform that gets our data from our Oracle databases, puts it in a warehouse hosted by the SQL 2012 instance, and loads it into cubes. What's tripping me up about this platform is licensing for Reporting Services and PowerPivot. My plan was to install SSRS and PowerPivot on the current SharePoint server, but my understanding of the licensing is that instead of just the new SQL server being licensed, I'd have to license both the new server and the SharePoint server. Conversely, I could install SharePoint onto the SQL server and only have to get a second SharePoint license, but then I'd lose the benefit of a separately deployed application server, and it would combine my data and application servers. Is my licensing understanding correct, or can I have SSRS and PowerPivot installed separately without incurring additional licensing costs?

    Read the article

  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me; I'm just an 'accidental' DBA because our DBA resigned, so I'm a total newbie at DBA work. You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provided tools to perform such a migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I first had to create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. Now the migration tool has successfully created a whole bunch of tables inside the "master" database. My questions:
    - Should I leave things as they are, or should I re-migrate the ERAS database to a different database?
    - If you suggest I perform a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB in step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Do you think I missed a step there?
    Thanks beforehand for any guidance.

    Read the article

  • Setup ejabberd with SQL Server 2008

    - by wonster
    Here's what I have got so far:
    1. Windows Server 2008 64-bit. Installed the latest version of ejabberd, ejabberd-2.1.8-windows-installer.exe. The Windows service starts up fine but seems ineffective; however, using the start & stop scripts works. I am able to log in to the admin page, which so far doesn't seem that versatile.
    2. Opened up ports 5222, 5226 and 5280 for my workstation to talk to the server.
    3. Got the Spark and Jabbear Windows clients to register, log in and instant-message with multiple accounts using the server.
    4. After confirming that I've got the very basics working, I decided to make use of SQL Server 2008 as the database. Reason? Mainly, I am very comfortable with SQL Server: I can deal with redundancy, failover, and data analysis easily. I'm not sure ejabberd's built-in DB provides all that. Following the instructions from ejabberd's documentation, I set up a System DSN that points to another physical database. The DSN checks out fine (I tried both Named Pipes and TCP/IP).
    5. Modified ejabberd.cfg: commented the line %%{auth_method, internal} and uncommented the line {auth_method, odbc}, then uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}.
    After making these changes, I restarted. No errors are found in the log files, but the Jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder, as I'm new to all this. I am basically stuck here on step 5. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!

    Read the article

  • Remote offscreen rendering

    - by redmoskito
    My research lab recently added a server that has a beefy NVIDIA graphics card, which we would like to use to do scientific computations. Since it isn't a workstation, we'll have to run our jobs remotely, over an SSH connection. Most of our applications require rendering OpenGL to an offscreen buffer, then doing image analysis on the result in CUDA. My initial investigation suggests that X11 forwarding is a bad idea, because OpenGL rendering would occur on the client machine (or rather the X11 server--what a confusing naming convention!) and would suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems like X11 forwarding shouldn't be necessary, but OpenGL needs $DISPLAY to be set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I think this is probably a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start. We're running Ubuntu Server 10.04, with no gdm, gnome, etc. installed; however, the xserver-xorg package is installed.
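    (One commonly cited way to satisfy the $DISPLAY requirement without forwarding X11 is a virtual framebuffer; a minimal sketch, assuming the xvfb package is installed and that ./render_job stands in for the actual application:)
        # Start an X server that renders into memory instead of a physical screen
        Xvfb :1 -screen 0 1280x1024x24 &
        # Point OpenGL at it and run the job
        DISPLAY=:1 ./render_job
    Note that Xvfb does software rendering, so while it makes $DISPLAY valid, it will not by itself engage the NVIDIA card; the other common route is running a real X server on the GPU with no monitor attached and pointing $DISPLAY at that.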

    Read the article

  • Cannot connect to MySQL on RDS (Amazon Web Services) from my laptop

    - by Bruno Reis
    I'm having some trouble connecting to a MySQL 5.1 server on an RDS instance on AWS from my laptop. A detailed description of the problem is here: https://forums.aws.amazon.com/thread.jspa?messageID=323397 In short: I have two MySQL servers, both with the same DB configuration and firewall (security group) configuration. One of them works fine: I can connect to it from my EC2 instances (i.e., from inside the AWS cloud) and from my laptop. The other one doesn't: I can connect from my EC2 instances but not from my laptop. The symptom: a connection attempt from my laptop just hangs and then times out, as if there were a firewall blocking me (i.e., silently dropping my SYN packets). I must say that everything had been working fine for a very long time, and this problem began suddenly, 3 days ago, without any modifications to DB parameters or the security groups. My current analysis of the situation:
    - The firewall (i.e., security group) cannot be the problem: both MySQL servers share the same firewall configuration, and I can connect to one of them but not to the other. Later on, I even added a rule to allow inbound connections from 0.0.0.0/0 (i.e., I turned off the firewall), and nothing. I also created a new, fresh security group and changed this instance's SG to the new one (to which I first added my IP address, and then 0.0.0.0/0), but still nothing.
    - The credentials cannot be the problem: I use the same ones from my laptop and from my EC2 instances, and the user (which is what Amazon calls the master user) has a host of '%' in the database.
    - MySQL is not blocking my IP due to, say, too many failed connection attempts: I've run FLUSH HOSTS on the database, and I also tried to connect using many different source IP addresses, even from all around the world through a VPN proxy service.
    What could I be missing? I'm asking here because it's been about 36 hours since I posted on the AWS forums but got no answer at all over there... someone here might have a solution! Any input is really appreciated, I'm out of ideas. Thanks!

    Read the article

  • How to continue an HTTrack mirroring session from the command line?

    - by isme
    I want to drive my mirroring project using the Command Prompt instead of the WinHTTrack interface, so that I can script and schedule the mirroring session more easily. The output of httrack --help gives a simple command for continuing an interrupted mirroring session:
        example: httrack --continue
        continues a mirror in the current folder
    When I try httrack --continue in my HTTrack project folder, all I get is output like this:
        Example: -%F "<!-- Mirrored from %s by HTTrack Website Copier/3.x [XR&CO'2010], %s -->"
        * Option %F needs to be followed by a blank space, and a footer string
    With each parameter on a new line for readability, the first line of my doit.log file looks like this:
        -qiC1%P0s0b0u1j0%s%u0N0%I0p1DaK0c1T30H0%kf2E1800A25000%c0.1%f#f
        -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        -%F ""
        -%l "en, en, *"
        http://saa.gov.uk/search.php?SEARCHED=1&SEARCH_TABLE=council_tax&SEARCH_TERM=City+of+Edinburgh&DISPLAY_COUNT=100
        -O1 "C:\\Users\\Iain\\Projects\\Council Tax Analysis\\Code\\HTTrack\\Council Tax Valuation List"
        -* \
        +*search.php?SEARCHED=1*
        -*DISPLAY_MODE=FULL*
    The parameter -%F "" should tell HTTrack to use an empty footer. I used the WinHTTrack interface to create the project and start the mirroring session, and I can interrupt and continue the session using the interface. The HTML files saved by WinHTTrack have no footer.
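    (For the scheduling half of the question, the continue command can be wrapped in a small batch file that changes into the project folder from the -O1 line above; a minimal sketch, assuming httrack.exe is on PATH (the file name resume-mirror.bat is made up):)
        @echo off
        rem resume-mirror.bat: resume the interrupted HTTrack mirror from its project folder
        cd /d "C:\Users\Iain\Projects\Council Tax Analysis\Code\HTTrack\Council Tax Valuation List"
        httrack --continue
    A Task Scheduler job pointing at this .bat would then re-run the session unattended.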

    Read the article

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case [1]. I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though, not being a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turned it on... and it sounded like four professional-grade hair dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents. Basically, my last desktop build included a $45-at-the-time graphics card, and it's been MacBook Pros and workstations since then, so I have zero idea whether I'll be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum framerate, or should I just send this thing back now and get the quietest card in my price range?
    [1] One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I'm only mentioning that because I suspect it could drop the overall output a few decibels, but obviously not that many.

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated ecommerce management system on my hosting service. For that product, no support is provided anymore by the vendor. Brief explanation: the shop website, which claims to run on the LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.
    The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".
    The analysis:
    - MySQL contains a description truncated at the € sign, so it's not PHP's fault.
    - The Access databases contain the full descriptions with the € sign, so it's not the fault of the webmaster writing bad descriptions, or of eDisplay cutting them.
    - The SQL that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign.
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    - The vsftpd log reports (obfuscated for privacy): Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c, which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP).
    - The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
    - [Add] Trying to manually upload the SQL file via SFTP messes up the euro sign too.
    - [Add2] Trying to manually upload using the Xftp client with explicit ASCII mode doesn't fix it either.
    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.
    The server: an Azure virtual machine running openSUSE 12.2, with both vsftpd and openSSH.
    The question: without asking the customer to manually upload files using FileZilla, or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?

    Read the article

  • Audit file removal (auditctl)

    - by user1513039
    For some reason, some script or program is removing a pid file for a service on a Linux server (CentOS 5.4 / 2.6.18-308.4.1.el5xen). I suspect a faulty cron script, but manual investigation did not lead me to it, and I still want to track it down. I have been using this auditctl rule:
        auditctl -w /var/run/some_service.pid -p w
    which helped me to see something, but not quite what I wanted:
        type=PATH msg=audit(11/12/2013 09:07:43.199:432577) : item=1 name=/var/run/some_service.pid inode=12419227 dev=fd:00 mode=file,644 ouid=root ogid=root rdev=00:00
        type=SYSCALL msg=audit(11/12/2013 09:07:43.199:432577) : arch=x86_64 syscall=unlink success=yes exit=0 a0=7fff7dd46dd0 a1=1 a2=2 a3=127feb90 items=2 ppid=3454 pid=6227 auid=root uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=38138 comm=rm exe=/bin/rm key=(null)
    The problem is that I see the ppid of the script that removed the file, but by analysis time the (p)pids are already invalid, since the scripts/programs have probably already exited (imagine a cron script deleting the file). So I need some way to expand or add audit rule(s) to be able to trace the parents of the /bin/rm at the time of removal. I have been thinking of adding a rule to monitor all process creation, something like:
        auditctl -a task,always
    but this happens to be very resource intensive. So I need help or advice on how to combine these rules, or how to expand any of the rules to help track the script/program. Thanks.
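    (Two small auditctl conveniences may help narrow this down; a sketch, assuming auditd is already running (the key name pidfile_rm is made up):)
        # Tag the existing watch with a key so matching events are easy to pull out
        auditctl -w /var/run/some_service.pid -p w -k pidfile_rm
        # Later, retrieve only the tagged records, with fields decoded
        ausearch -k pidfile_rm -i
    Since the ppid is only meaningful while the parent is alive, the records need to be correlated promptly; one option is matching the event's timestamp and ppid against process accounting (psacct/lastcomm) from the same moment, which survives after the processes exit.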

    Read the article

  • Partition Magic 8 made TrueCrypt partition invisible

    - by gmancoda
    Partition Magic 8 took a dump on my TrueCrypt partition... and I let it happen! Now I am left with cleaning up the mess. In short, my encrypted partition is now invisible. TestDisk's analysis says of the disk containing the encrypted partition: "Space conflict between the following two partitions". From googling and searching various sites, I have learned the following:
    - Hex editing is beyond me.
    - Partition recovery tools are useless.
    - I am not ready to drop the big bucks for professional help.
    - ... and that I should have kept an external backup of the volume header.
    Now, to get back the volume header, I am planning on recreating the exact same partitions on a new disk of the exact same model, then encrypting it with the exact same password/keyfiles, and then exporting its volume header to a file. Finally, I hope to be able to restore this volume header onto my damaged drive. Before I undertake this plan, I would like to know if anyone else out there has tried it and, if so, how successful they were. All other suggestions and tips are welcome!! Thanks.

    Read the article

  • Hibernate Criteria: Perform JOIN in Subquery/DetachedCriteria

    - by Gilean
    I'm running into an issue with adding JOINs to a subquery using DetachedCriteria. The code looks roughly like this:
        Criteria criteria = createCacheableCriteria(ProductLine.class, "productLine");
        criteria.add(Expression.eq("productLine.active", "Y"));
        DetachedCriteria subCriteria = DetachedCriteria.forClass(Models.class, "model");
        subCriteria.setProjection(Projections.rowCount());
        subCriteria.createAlias("model.language", "modelLang");
        criteria.add(Expression.eq("modelLang.language_code", "EN"));
        subCriteria.add(Restrictions.eqProperty("model.productLine.id", "productLine.id"));
        criteria.add(Subqueries.lt(0, subCriteria));
    But the logged SQL does not contain the JOIN in the subquery, yet does include the alias, which is throwing an error:
        SELECT * FROM PRODUCT_LINES this_
        WHERE this_.ACTIVE=? AND ? < (
            SELECT COUNT(*) AS y0_ FROM MODELS this0__
            WHERE modelLang3_.LANGUAGE ='EN'
            AND this0__.PRODUCT_LINE_ID = this_.ID )
    How can I add the joins to the DetachedCriteria?
    Hibernate version: 3.2.6.ga; Hibernate core: 3.3.2.GA; Hibernate annotations: 3.4.0.GA; Hibernate commons-annotations: 3.3.0.ga; Hibernate entitymanager: 3.4.0.GA; Hibernate validator: 3.1.0.GA
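    (One detail worth noting in the snippet above, offered as a hypothesis rather than a confirmed answer: the modelLang alias is created on subCriteria, but its restriction is added to the outer criteria. A sketch of the same code with the restriction moved onto the subquery, which is at least consistent with the SQL error shown:)
        DetachedCriteria subCriteria = DetachedCriteria.forClass(Models.class, "model");
        subCriteria.setProjection(Projections.rowCount());
        // The alias exists only inside the subquery, so restrict on it there
        subCriteria.createAlias("model.language", "modelLang");
        subCriteria.add(Restrictions.eq("modelLang.language_code", "EN"));
        subCriteria.add(Restrictions.eqProperty("model.productLine.id", "productLine.id"));
        criteria.add(Subqueries.lt(0, subCriteria));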

    Read the article

  • Named query not known error trying to call a stored proc using Fluent NHibernate

    - by Hamman359
    I'm working on setting up NHibernate for a project and I have a few queries that, due to their complexity, we will be leaving as stored procedures. I'd like to be able to use NHibernate to call the sprocs, but have run into an error I can't figure out. Since I'm using Fluent NHibernate, I'm using mixed-mode mapping as recommended here. However, when I run the app I get a "Named query not known: AccountsGetSingle" exception and I can't figure out why. I think I might have a problem with my HBM mapping, since I'm not very familiar with using them, but I'm not sure. My NHibernate configuration code is:
        private ISessionFactory CreateSessionFactory()
        {
            return Fluently.Configure()
                .Database(MsSqlConfiguration.MsSql2005
                    .ConnectionString((conn => conn.FromConnectionStringWithKey("CIDB")))
                    .ShowSql())
                .Mappings(m =>
                {
                    m.HbmMappings.AddFromAssemblyOf<Account>();
                    m.FluentMappings.AddFromAssemblyOf<Account>();
                })
                .BuildSessionFactory();
        }
    My hbm.xml file is:
        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <sql-query name="AccountsGetSingle">
            <return alias="Account" class="Core, Account"></return>
            exec AccountsGetSingle
          </sql-query>
        </hibernate-mapping>
    And the code where I am calling the sproc looks like this:
        public Account Get()
        {
            return _conversation.Session
                .GetNamedQuery("AccountsGetSingle")
                .UniqueResult<Account>();
        }
    Any thoughts or ideas would be appreciated. Thanks.

    Read the article

  • Nhibernate exception - No persister for

    - by Muhammad Akhtar
    I am getting an exception when calling a stored procedure using NHibernate; here is the exception:
        No persister for: ReleaseDAL.ProgressBars, ReleaseDAL, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
    Here is the class file:
        public class ProgressBars
        {
            public ProgressBars() { }
            private Int32 _Tot;
            private Int32 _subtot;
            public virtual Int32 Tot { get { return _Tot; } set { _Tot = value; } }
            public virtual Int32 subtot { get { return _subtot; } set { _subtot = value; } }
        }
    Here is my mapping file:
        <hibernate-mapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="urn:nhibernate-mapping-2.2" assembly="ReleaseDAL" namespace="ReleaseDAL">
          <sql-query name="ps_getProgressBarData1">
            <return alias="ProgressBars" class="ProgressBars">
              <return-property name="Tot" column="Tot"/>
              <return-property name="subtot" column="subtot"/>
            </return>
            exec ps_getProgressBarData1
          </sql-query>
        </hibernate-mapping>
    And the calling code:
        public List<ProgressBars> getProgressBarData1(Guid ReleaseId)
        {
            ISession session = NHibernateHelper.GetCurrentSession();
            List<ProgressBars> progressBar = (List<ProgressBars>)session.GetNamedQuery("ps_getProgressBarData1").List<ProgressBars>();
            NHibernateHelper.CloseSession();
            return progressBar;
        }
    I'm stuck because of this exception, and I have Googled a lot without success. Please give me an idea of how to solve this problem. Thanks.

    Read the article

  • Apache: VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not sup

    - by user248959
    Hi, when I add the line below to /etc/apache2/apache2.conf, I get the error shown after it when I restart Apache:
        Include /usr/share/doc/apache2.2-common/examples/apache2/extra/httpd-vhosts.conf

        [Mon Jun 14 12:16:47 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Mon Jun 14 12:16:47 2010] [warn] NameVirtualHost *:80 has no VirtualHosts
    This is my httpd-vhosts.conf file:
        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80
        #
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        <VirtualHost *:80>
            ServerName tirengarfio.com
            DocumentRoot /var/www/rs3
            <Directory /var/www/rs3>
                AllowOverride All
                Options MultiViews Indexes SymLinksIfOwnerMatch
                Allow from All
            </Directory>
            Alias /sf /var/www/rs3/lib/vendor/symfony/data/web/sf
            <Directory "/var/www/rs3/lib/vendor/symfony/data/web/sf">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>
    Any idea? Regards, Javi
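    (A hedged note on what this pair of messages usually indicates: Apache emits them when the NameVirtualHost/VirtualHost address forms are inconsistent across all loaded config files, e.g. NameVirtualHost *:80 is declared twice (the included example file adds a second declaration next to the distribution's own), or one vhost uses a bare * or an explicit IP:port while others use *:80. A minimal sketch of a consistent layout, with assumed Debian-style file locations:)
        # /etc/apache2/ports.conf -- declare the name-based address exactly once
        NameVirtualHost *:80
        Listen 80

        # every vhost, in every included file, then uses the same address form
        <VirtualHost *:80>
            ServerName example.com
        </VirtualHost>
    On Debian/Ubuntu, ports.conf typically already contains the NameVirtualHost *:80 line, so checking for that duplication and for any <VirtualHost *> without a port is a reasonable first step.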

    Read the article
