Search Results

Search found 5124 results on 205 pages for 'max methot'.


  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL Server as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load.

    My understanding of how SQL Server deletes rows is that it's a garbage collection process: the row is marked as a ghost and is later removed by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that, regardless of the size of the data in the BLOB, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30 MB that cause this issue. This is an edge case; we don't encounter such files often, and even though we're looking into SQL FILESTREAM as a solution to some of our problems, we're trying to narrow down where these issues originate.

    We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these live in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of BLOB data. We are trying to determine whether this is even an avenue worth exploring, or whether it has to be one of our processes around the delete that's causing the issue. Are there any situations in which this could occur? Is it common for a database server to reach the point of complete timeouts when many of these deletes occur simultaneously? Is there a way to combat this issue if it exists? (cross-posted from StackOverflow)
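
    One mitigation we are considering testing is chunking the purge instead of issuing one large transactional delete, so each transaction commits quickly and locks are held briefly. A sketch only; the table name and predicate below are made-up stand-ins for our schema:

        -- Hypothetical table and predicate; the point is the batching pattern.
        -- Each small DELETE commits on its own, so locks are short-lived and
        -- the ghost cleanup task can keep pace with the purge.
        DECLARE @rows INT;
        SET @rows = 1;
        WHILE @rows > 0
        BEGIN
            DELETE TOP (100) FROM dbo.FileStore
            WHERE UploadDate < '20100101';
            SET @rows = @@ROWCOUNT;
        END;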

    Read the article

  • MS Dynamics CRM users disappear

    - by Max Kosyakov
    Recently we came across quite a weird issue. The administrators say that once in a while they notice that user accounts in MS Dynamics CRM are lost. When a new user is added to the system, the administrators add him/her to Active Directory first. Then they go to the Dynamics CRM interface (system configuration -> administration -> users), add the new user to the CRM, add roles to this user, and grant them the relevant permissions. The user is then able to use a custom application, which connects to the Dynamics CRM via WCF.

    After a while (a few weeks or months) the user becomes unable to use the custom application because Dynamics CRM cannot authorise this user. When the administrators open the Dynamics CRM user management interface (configuration -> administration -> users) and browse through the list of CRM users, they cannot find that user in the list. When they try to add the user back to Dynamics CRM, the CRM fails with the error message "User already exists". Moreover, the user still exists in Active Directory. The admins are very sure the user had been added to the CRM before he/she started to work; the very fact that the user was able to use the custom application normally shows that the user had indeed been registered in the CRM.

    How come the user is not listed in the CRM user management interface at all? Has anyone faced issues like this, or seen or heard of disappearing CRM users somewhere? Any help is appreciated. Where can one start digging?
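
    One diagnostic worth trying (a sketch; SystemUserBase is the stock table in the organization's MSCRM database, and the account name below is a placeholder): query the database directly to see whether the user record still exists but is flagged as disabled, since the default user grid normally hides disabled accounts:

        -- Run against the organization's MSCRM database; 'DOMAIN\lost.user'
        -- stands in for the account that vanished from the grid.
        SELECT SystemUserId, DomainName, IsDisabled
        FROM   SystemUserBase
        WHERE  DomainName = 'DOMAIN\lost.user';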

    Read the article

  • Show PHP error messages on IIS 7

    - by Max
    I am using IIS as a web server on my development machine for PHP web development. Or at least, I am trying to. When there is a syntax error in a PHP script and I open that file in my web browser, I just get a 503 "internal server error" and the default IIS error page for this error. Some browsers don't display that page at all, possibly because of the 503 HTTP response header. I would like IIS to act in this case just like the Apache web server: serve the output of the PHP file anyway, so that the error message gets printed out. How can this be done? EDIT: PHP settings: display_errors is on and error_reporting is set to E_ALL.
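
    One approach I have seen suggested (a sketch, assuming a per-site web.config on IIS 7) is to stop IIS from replacing the response body on error status codes, so PHP's own error output reaches the browser:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- Pass the application's error output through instead of
                 showing the default IIS error page for 4xx/5xx responses. -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>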

    Read the article

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?
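
    A diagnostic sketch for the next time it happens (assuming lsof and pgrep are available): check whether the running lighttpd still holds an open handle on the access log, to tell a stale file descriptor apart from the server genuinely writing nothing:

        # List the log file descriptors held by the running lighttpd; an
        # entry marked "(deleted)", or no entry at all, points at an
        # fd/rotation problem rather than lighttpd itself.
        lsof -p "$(pgrep -o lighttpd)" | grep -i log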

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning for a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. That website will use TYPO3 as its CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal; just use an absolute URL like http://static.example.com/js/main.js and be done with it.

    But: that website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image. First, the original image like products/some.jpg is uploaded to the static file server and is therefore not on the same server as the PHP application that tries to create the thumbnail. Second, TYPO3 writes created thumbnails to a temp directory, which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which sits on the same server as the main application; the static file server is in that case basically useless, as all thumbnails will be requested from the main application's server.

    So, my question is: how do I overcome these shortcomings? Is it possible to "symlink" some directories to another server, so that, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), the products folder actually "points" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?
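
    As far as I can tell, at the file system level this is usually done with a network mount rather than a symlink. A sketch (the host name comes from my example above; the export and mount paths are assumptions):

        # On the application server: mount the static server's products
        # directory where TYPO3 expects it, so thumbnail creation can read
        # products/some.jpg as if it were local. Paths are illustrative.
        mount -t nfs static.example.com:/var/www/static/products /var/www/typo3/products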

    Read the article

  • Unable to connect to remote MS SQL Server 2008 Express SP3 instance by name

    - by Max
    I am trying to connect to a remote MS SQL Server 2008 SP3 x86 instance using its name. At first glance all seems to work well (e.g. it is possible to connect to the server locally and to successfully telnet its port remotely), but there is a thing I can't understand... This line should connect us to the default instance of the remote SQL Server:

        osql -S ServerIP -d MyDatabase /U sa -P MyPassword

    and it does the trick. However, the next one:

        osql -S ServerIP\MyInstance -d MyDatabase /U sa -P MyPassword

    ends up with the following error:

        [SQL Native Client]SQL Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].
        [SQL Native Client]Login timeout expired
        [SQL Native Client]An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.

    The only instance running on the server is MyInstance, which is (I guess) the default one. Could you please put some time into explaining this issue?
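
    For reference, name lookups for a named instance are answered by the SQL Server Browser service over UDP 1434; if that service is stopped or the port is filtered, ServerIP\MyInstance fails even though a direct port connection works. A quick check on the server (SQLBrowser is the stock service name):

        rem Is the SQL Server Browser service running?
        sc query SQLBrowser
        rem Start it if it is stopped.
        net start SQLBrowser

    Connecting by port instead of by name (e.g. osql -S ServerIP,1433) should also bypass the Browser lookup entirely.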

    Read the article

  • How can I get a list of created and deleted files on the server?

    - by max
    I have an image-sharing website where users log in and upload images. Last night I lost about 30 newly, consecutively uploaded images... I mean, they have been uploaded... apparently... they are in the database, but the actual images on the server are gone! The error log doesn't show anything, so I thought my best option is to check a list of created and deleted files, if there is such a thing! Is there a log file for created and deleted files on the server? I'm using DirectAdmin.
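
    If no such log exists, would a watch like the following make sense going forward? A sketch using inotify-tools (the watched path follows DirectAdmin's usual layout and is an assumption on my part):

        # Log every create/delete under the site's image directory from now
        # on; this cannot recover last night's losses, only catch new ones.
        inotifywait -m -r -e create,delete \
            --timefmt '%F %T' --format '%T %e %w%f' \
            /home/admin/domains/example.com/public_html/images \
            >> /var/log/file-watch.log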

    Read the article

  • Windows XP freezes completely

    - by Max
    Lately my Win XP SP3 machine has started giving me problems. From time to time it freezes completely, meaning the system does not react to the mouse or keyboard; the keyboard LED indicators also do not react to the Caps Lock, Scroll Lock, or Num Lock keys. The problem is that I don't understand what causes this behavior, and it seems to happen randomly. The system event log also does not contain any clues. I'm thinking this could be some driver/hardware problem, but I don't know which. Are there any tools that would help me figure out the cause of this problem? Does anybody have any clue how I can fix this?

    Read the article

  • Simplest way to host a domain name just so I can forward it to a WordPress blog?

    - by Max Williams
    Hi all. I have a domain name and a WordPress account. The domain name isn't hosted anywhere at the moment; I just want to set it up so that my WordPress blog uses the domain. According to the WordPress help page on this issue, I have to do the following: "Update your domain's name servers to the following. Make sure to remove any existing name servers that are already there. NS1.WORDPRESS.COM NS2.WORDPRESS.COM NS3.WORDPRESS.COM" So, this is the only thing that I need a server for - if I even need a server at all? Is there a really simple way I can do this for free, or for very cheap? I'm a bit ignorant about this stuff: I do Rails coding but don't get involved in the really technical stuff to do with servers, DNS and what have you.
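
    For what it's worth, no hosting is needed for this at all; the name servers are changed in the domain's settings at the registrar. Once that change has propagated, the delegation can be verified with a standard DNS lookup (example.com standing in for the real domain):

        # Check which name servers the domain delegates to.
        dig NS example.com +short
        # Expected output once the change has taken effect:
        #   ns1.wordpress.com.
        #   ns2.wordpress.com.
        #   ns3.wordpress.com.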

    Read the article

  • Backup XAMPP (Htdocs & MySQL)

    - by Max
    I have a development server, but would like to back up everything at least daily to a remote location. I would like to back up the htdocs folder and the MySQL databases, but if possible also the settings of the server and anything else relevant. At the moment I am using DropBox for htdocs, but this is not ideal. I have looked into Git, and into simple daily DropBox copy-paste. I was wondering what your advice would be; for example, how hard would it be to set this up as a cloud-based system? Any and all advice is greatly appreciated.
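
    The kind of thing I have in mind is sketched below (mysqldump for the databases, tar for htdocs, rsync to push everything off-site); all paths, the XAMPP location and the credentials are placeholders:

        #!/bin/sh
        # Dump all databases, archive htdocs, and push both to the remote host.
        STAMP=$(date +%F)
        mysqldump --all-databases -u root -pSECRET > "/backups/mysql-$STAMP.sql"
        tar czf "/backups/htdocs-$STAMP.tar.gz" /opt/lampp/htdocs
        rsync -az /backups/ backupuser@backup.example.com:/srv/backups/xampp/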

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time: W_DAY_D and W_CALENDAR_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day, such as calendar year, month, week, and quarter, allowing it to be used as the time element when creating requests in Analytics to group data into these time granularities. W_CALENDAR_FS is used for calculations such as spreads, but is based on the same date range. The min and max day_dt (W_DAY_D) and daydate (W_CALENDAR_FS) are determined by the defined date range: a start date plus a rolling interval, generally the start date plus 3 years.

    In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real time reporting in P6, and the same date range is validated and used for the P6 Reporting Database. The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013; 1/1/2010 is the min date because we always back-fill to the beginning of the year. On April 2nd, the Extended Schema services run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process runs, the Reporting Database picks up this change and also adjusts the max date in W_DAY_D and W_CALENDAR_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to trigger a recalculation, but with general system usage the dates in these tables will progress forward with the rolling interval.

    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into the STAR, and the date range defines how far out activity and resource assignment spread data is spread in these tables. If an activity lasts 5 days, it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data beyond that 3-year range is bucketed into the last day of the range. At the overall project level, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is the important question when choosing your date range: do you really need to see spread data down to the day 5 years into the future? Generally that much granularity years in the future is not needed. Remember, all those values 5, 10, 15, or 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
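
    A quick way to see the effective rolling range in a given STAR schema is to query the two calendar tables directly (table and column names as described above):

        -- The min/max should match the configured start date (back-filled
        -- to January 1) and the rolling end date.
        SELECT MIN(day_dt)  AS range_start, MAX(day_dt)  AS range_end FROM W_DAY_D;
        SELECT MIN(daydate) AS range_start, MAX(daydate) AS range_end FROM W_CALENDAR_FS;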

    Read the article

  • Can't burn 8.1G ISO onto 8.4GB DVD - "Media does not have enough free space"

    - by Max Williams
    I'm trying to burn a DVD on a Mac with an external (FireWire-connected) DVD drive. I'm checking the size of the ISO like this:

        DVD-4:dvd_files macbook$ ls -l /tmp/hybrid.iso
        -rw-r--r--  1 macbook  wheel  8700884992 Aug 22 10:57 /tmp/hybrid.iso
        DVD-4:dvd_files macbook$ ls -lh /tmp/hybrid.iso
        -rw-r--r--  1 macbook  wheel  8.1G Aug 22 10:57 /tmp/hybrid.iso

    The "human-readable" size is 8.1 gigs, but when I try to burn onto an 8.4 GB dual-layer DVD, it says "Media does not have enough free space". The definition of a "gigabyte" according to Wikipedia is 1 billion bytes, so the ISO size should actually be 8.7 gigs by this definition, in which case the disc definitely isn't big enough and it's just that the -h option to ls is misleading. Is the discrepancy just due to the ls command using a different definition of "G" (e.g. 1024 megs, a.k.a. 1.07 gigs)? That comes out as 8.103, which fits what ls is displaying.
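
    The arithmetic seems to confirm this: ls -h reports binary gigabytes (1024^3 bytes), while optical media capacities are quoted in decimal gigabytes (10^9 bytes), so the same byte count yields both figures:

        # Same file, two unit conventions (bc ships with OS X).
        echo '8700884992 / (1024^3)' | bc -l   # 8.102...  -> what ls -lh prints
        echo '8700884992 / (10^9)'   | bc -l   # 8.700...  -> decimal GB, the DVD convention

    So the image is 8.7 decimal gigabytes and genuinely does not fit on the disc; the ls -h output is the misleading part.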

    Read the article

  • Do large corporations block jQuery content on web pages?

    - by Max Vernon
    We are currently redesigning our website. The company we've hired to do the redesign is advocating the use of jQuery to render the pages dynamically. Our SEO specialist is under the impression that many larger corporations may have jQuery blocked in their proxies to prevent their users from visiting sites like Facebook. Is this something you are aware of? Forgive me if this is off topic for SF.SE!

    Read the article

  • Customer won't provide SSH access - FTP only

    - by Max
    Eh, here is my problem: I am working at a web development agency (that's a problem, but not the real problem; read on). Most of the time I choose the live server myself when creating a new website project, but now the customer already has a "server" (10 GB on a cheapo host!) and the "admin" refuses to give me SSH access to it. But I need to access the server via a shell because many files will be transferred (I need to be able to upload and extract a tar), and I need to insert or create MySQL dumps via the command line. He argues FTP and phpMyAdmin should be enough... As far as I know the webspace was ordered just to host the website, so no security-critical apps are running there. How can I either convince the admin to give me the SSH login, or tell management that we need our own server? Anyone with similar experiences? This is really annoying, as this is a very small project that should be done fast, and now one has to fight just to get the work done...

    Read the article

  • Am I safe on Windows if I continue like this?

    - by max
    Of all the tons of anti-malware software available for Windows all over the internet, I've never used any paid solution (I am a student, I have no money). In the last 10 years, my computers running Windows have never been hacked/compromised or infected so badly that I had to reformat them (of course I did reformat them for other reasons). The only security program I have is Avast Home Edition, which is free, installed on my computers. It has never caused any problems; it has always detected malware, updates automatically, has an option to sandbox programs, and everything else I need. Even when I did get infected, I just ran a boot-time scan with it, downloaded and ran Malwarebytes, scanned Autoruns logs, checked running processes with Process Explorer, did some other things, and made sure I had cleaned my computer. I am quite experienced and I've always taken basic precautions, like not clicking suspicious executables, not going to sites that are suspicious according to WOT, and all that blah. But recently I've been doing more and more online transactions, and since it's 2012 now, I'm doubtful whether I need more security or not. Have I just been lucky, or do my computing habits obviate the need for any more (or paid) security software?

    Read the article

  • IIS_IUSRS cannot access files uploaded and created by Network Service - error 401.3

    - by Max
    Let me rephrase my question, as I have investigated further. The problem: I have a PHP script that is used to upload images on my Windows Server 2008 web server. The files are created in the correct directory; they are created and owned by the user Network Service, and Network Service has full access to the uploaded file. As soon as I try to access the uploaded file (mostly an image) via HTTP, I get a 401.3 "not authorized" error. Now, if I right-click on the inaccessible image and grant the IIS_IUSRS group read permissions via the security tab, the image can be accessed! By default IIS_IUSRS has NO access at all to the uploaded file. The directory containing the image files has the correct access rights set, but each file newly uploaded to the directory lacks permissions for IIS_IUSRS. The question: how can I grant IIS_IUSRS access to newly uploaded files by default? The appPool of the website has its identity set to the default; I also tried setting it to "networkIdentity" or so, but that did not work either.
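
    One workaround I am considering (a sketch; the folder path is a placeholder) is granting IIS_IUSRS an inheritable read ACE on the upload directory itself, so that new files pick up read access automatically:

        rem (OI)(CI)R = object-inherit + container-inherit read, so files and
        rem subfolders created here inherit read access for IIS_IUSRS.
        icacls "C:\inetpub\wwwroot\uploads" /grant "IIS_IUSRS:(OI)(CI)R"

    It may also matter that a file moved from PHP's upload temp directory on the same volume keeps its original ACL instead of inheriting from the target folder, which would explain why inheritance on the directory is being bypassed.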

    Read the article

  • How to properly secure Windows Server 2008 R2 that will host SQL Server 2012?

    - by Max
    I am a .NET programmer trying to create this setup: I want this server to be inaccessible from the DMZ except for IPsec connections, and to also have a private network that will be accessible through another Windows 2008 server hosting a VPN. That is how our Windows 2003 infrastructure works, and I am trying to do the same with 2008 servers. Are there any guides or documents that cover this scenario?

    Read the article

  • MongoDB replication: no primary elected

    - by Max
    I have three servers with mongod installed on them, running as a replica set. Suddenly the two secondaries became unavailable (the mongod process died); I think because they were too stale. The problem is that the original PRIMARY is now a SECONDARY, and my application doesn't work because it can't connect to a PRIMARY. I mean, in which way does that help me, if the replica set can't fail over? Am I missing something? Furthermore, I am asking myself: why did the secondaries die / why are they too stale, and what can I do about it? FYI: my database is quite big (40 GB on disk).
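
    For what it's worth, this appears to be by design: a replica set member only becomes PRIMARY when it can see a majority of the set, and with two of three members down the survivor cannot win an election, so it stays SECONDARY. A diagnostic sketch from the surviving member's mongo shell:

        // Summarize each member's state; stateStr and health confirm which
        // members are down and whether a majority is reachable.
        rs.status().members.forEach(function (m) {
            print(m.name + "  " + m.stateStr + "  health=" + m.health);
        });

    As for the staleness: a secondary that falls further behind than the primary's oplog covers cannot catch up and must do a full resync; a larger oplog gives more headroom.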

    Read the article

  • Open Office: How to disable image link updates

    - by Max Kielland
    I'm writing a user manual for a card game, and there are a looot of linked images. Open Office works very slowly because every time I flip to a page with linked images it starts to update them. Is it possible to tell Open Office NOT to update the links until I tell it to do so? I would like it to display the same snapshot it showed the last time I initiated a link update. I'm using Open Office v3.3.0. Thank you.

    Read the article

  • How can I change the default program installation directory in Windows 7?

    - by Max
    Windows 7 is installed on my C drive, which is quite small. I am very tired of instructing new programs to put their files on my larger D drive during installation; I would like to change the default drive. This article says that you can use a registry hack, but I am giving Microsoft the benefit of the doubt and naively assuming that a configuration option exists somewhere. It's 2010... do I really have to hack my registry to make a simple tweak like this? Also, there's a ServerFault question that explains how to move the "Users" directory and create a symlink, which could also work. However, at the moment I have some apps in C:\Program Files, some apps in C:\Program Files (x86), and some apps in the corresponding folders on D:\, so it would be a hassle. Also, my small OS boot drive is a 10k RPM WD Raptor, and I feel like that probably gives a speed boost to apps installed on it that need to read & write to their directories a bunch. I wonder if it actually matters.
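
    For reference, the registry value this hack usually refers to is ProgramFilesDir under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion (a sketch; changing it is unsupported by Microsoft, and some installers and updaters still assume the original path):

        rem Changes the default location offered to new installers; existing
        rem installs are untouched. Unsupported by Microsoft.
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "D:\Program Files" /f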

    Read the article

  • What is a good topic for a research paper on modern computer architecture?

    - by Max Schmeling
    This may not be the right place for this, but I wanted to get this question in front of some of the brightest people on the internet, so I thought I'd give it a shot. I have to write a research paper on some modern aspect of computer architecture. The subject is really not very restrictive; pretty much any recent development in computer hardware will work. I want to write it over something really interesting, but I don't have a lot of good ideas. What would make a really interesting paper?

    Read the article

  • How do I create a yum repo file?

    - by max
    I know there is a previously asked question, but I still have some doubts, so I am asking again: how do I create a yum repo file? I know that I have to create a .repo file in /etc/yum.repos.d/. Below is the pattern:

        [name]
        name=
        baseurl=
        enabled=1
        gpgcheck=1
        gpgkey=

    Here, in the baseurl, which link should I give? I'm fully confused about this. How do I get that baseurl link? Can anyone please explain this to me clearly? I am using CentOS 6.2.
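
    For CentOS 6.2 a filled-in example would look roughly like this (a sketch; the URL follows the stock CentOS mirror layout, and yum expands $releasever and $basearch by itself):

        # /etc/yum.repos.d/centos-os.repo
        [centos-os]
        name=CentOS-$releasever - Base
        baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
        enabled=1
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    The baseurl just has to point at a directory that contains a repodata/ subdirectory; any public or internal mirror exposing that layout will work.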

    Read the article

  • How do I turn a Wi-Fi "hotspot" into a local wired network?

    - by Max Schmeling
    Here's the situation: in a remote "office" I have a computer with no network connection that I need to network with when I'm there. There is a wireless network where this computer is, but no wireless adapter in the computer. I have a laptop running Windows 7 that can connect to the wireless network, and the computer is running Windows Vista. What is the best way to get them both connected? I know I can buy a USB wireless adapter or something for the computer, but is there an easy way to do it with what I've got?

    Read the article
