Search Results

Search found 25386 results on 1016 pages for 'zend test'.


  • SQL Server backup/restore error: The Media Family on Device is Incorrectly Formed.

    - by Chris
    Basically, I'm having this issue: http://www.sqlcoffee.com/Troubleshooting047.htm What I'm doing is running a script I found online (http://pastebin.com/3n0ZfybL) to do a full backup, then rar'ing up the file and moving it to my computer. The CRC of the backup file inside the rar is correct on both computers, so there is no problem with data being corrupted when I transfer it. But then I go and try to restore the database on my dev computer here and I get the errors "sql server cannot process this media family" ... "msg 3013". Why is this happening? I'd test out the backup on the server I'm getting it from, but it's a production server.
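
    A hedged way to narrow this down on the dev machine: RESTORE FILELISTONLY and RESTORE VERIFYONLY read the backup header without actually restoring anything, so they show whether the transferred .bak itself is readable. The instance name and path below are placeholders:
      REM Run on the dev box where the restore fails (placeholder instance and path)
      sqlcmd -S .\SQLEXPRESS -E -Q "RESTORE FILELISTONLY FROM DISK = N'C:\backups\mydb.bak'"
      sqlcmd -S .\SQLEXPRESS -E -Q "RESTORE VERIFYONLY FROM DISK = N'C:\backups\mydb.bak'"
    If these fail with the same media-family error, the .bak file itself is the problem (often a truncated transfer or a backup that was striped across multiple devices); if they succeed, the issue is in the RESTORE DATABASE command being used.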

    Read the article

  • Ubuntu sound volume is 40% lower than in Windows

    - by ncomx
    I have 2x 2.1 speakers connected to the computer where Ubuntu 12.04 is installed. On the software side I've set all the volume controls to 100% with alsamixer. The speakers have their own volume control; keeping that at the same level and switching between Ubuntu and Windows (XP and 7), the output volume on Windows is at least 40% higher, and even with the Windows volume control at 50% (without touching the speakers' knob) it's still much louder than the sound on Ubuntu. Why could this be happening? Are there alternative sound drivers (other than the default ones) I could test to see if they make a difference? Some info about the card:
      root:$ cat /proc/asound/cards
      0 [PCH ]: HDA-Intel - HDA Intel PCH
        HDA Intel PCH at 0xfbff4000 irq 55
      1 [Generic ]: HDA-Intel - HD-Audio Generic
        HD-Audio Generic at 0xfbcfc000 irq 56
      root:$ lspci | grep -i audio
      00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
      02:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Cayman/Antilles HDMI Audio [Radeon HD 6900 Series]
    I think the one I am using is the Intel one; the other seems to come from the VGA card, which is an ATI Radeon 6950. Running gstreamer-properties and switching between ALSA, OSS, OSSv4 and PulseAudio doesn't seem to make any difference.

    Read the article

  • Network connectivity issues with Windows Store

    - by Duy Tran
    I have Windows 8 Pro build 9200 installed on my Dell laptop. I want to install some new apps and updates from the Store, but some network problem seems to cause the download gauge to appear without ever actually progressing. I followed some instructions to switch from a local user to my Microsoft account, but the "Please wait" screen keeps showing and I don't really know why. I still have internet access and can use apps like People, Mail, etc. with my account logged in, and I can surf the net using Firefox, Chrome and Internet Explorer. I did another test from cmd with ping -t google.com and it confirmed that the laptop has internet access. Does anybody know a solution to make the Store work properly? Or is there any workaround to switch to the Microsoft account instead of a local user account?

    Read the article

  • Send mail from a distribution group's email address

    - by Campo
    A user has Send permission on a distribution group on a Windows Server 2003 domain. I am the admin. When either of us sends email using the distribution group's email address we get a non-delivery report:
      Your message did not reach some or all of the intended recipients.
      Subject: TEST
      Sent: 4/19/2010 4:46 PM
      The following recipient(s) cannot be reached:
      [email protected] on 4/19/2010 4:46 PM
      You do not have permission to send to this recipient. For assistance, contact your system administrator.
      MSEXCH:MSExchangeIS:/DC=local/DC=DOMAIN:SERVERNAME
    Thanks, JC

    Read the article

  • Is FreeBSD more suitable than CentOS for firing 40k concurrent connections (for Jmeter)?

    - by blacklotus
    Hi, I am trying to run JMeter to simulate 40k concurrent users and stress-test a particular system. Putting aside the possibility that JMeter may not be able to push such a high number (although I have read that it can at least handle 10k concurrent threads on a very powerful machine), is FreeBSD a more suitable OS than CentOS for the JMeter machine when it comes to handling 40k (or as many as possible) concurrent outbound connections? The reason I ask is that I have found articles on tuning and optimizing FreeBSD for maximum outbound connections, but have had little luck finding the same for CentOS. It makes me wonder whether, for some specific reason, people don't use CentOS for such a high number of outbound connections. Personally, however, I am more familiar with CentOS and would like to stick with it if possible. Any input is greatly appreciated!
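
    For what it's worth, the Linux-side limits that usually matter at this scale are the per-process file-descriptor limit and the ephemeral port range, and both can be raised on CentOS too. A rough sketch (run as root; the values are illustrative, not tuned recommendations):
      # Raise the file-descriptor limit for the shell that launches JMeter
      ulimit -n 65535
      # Widen the ephemeral (source) port range and the system-wide file limit
      sysctl -w net.ipv4.ip_local_port_range="1024 65000"
      sysctl -w fs.file-max=200000
      # Recycle TIME_WAIT sockets faster for short-lived load-test connections
      sysctl -w net.ipv4.tcp_tw_reuse=1
    Note that a single source IP has at most roughly 64k ephemeral ports to any one destination address and port, so 40k concurrent connections from one box is close to the ceiling regardless of the OS.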

    Read the article

  • Soft sound problem on Sony Vaio netbook in GTalk and GMail Chat

    - by mx4399
    I have a new Sony Vaio netbook and have a problem with the sound, or rather a lack of loudness. When using GTalk or chat in GMail with earphones all is OK, but when using the netbook's built-in speakers the sound is very, very faint (also in the GMail chat test section). I installed the Sony sound drivers and checked all the sound settings; VLC also plays music at an acceptable level, though still not as loud as my old MacBook did. All settings are at 100%, but the output is still much too soft to use. Anyone got an idea?

    Read the article

  • Spoofing domains - using one domain to look at another without frame redirect

    - by hfidgen
    In Plesk 9.2.2, does anyone know how the following can be achieved? I've got domain1.co.uk registered in Plesk, but the domain has not been set up with any nameservers or A records, so it is unreachable from the web. However, I need to test it while we get the domain1.co.uk nameservers etc. sorted over the next week or so. So, I've got sparedomain.co.uk registered, with the nameservers and A records pointing to the server, and sure enough it displays the default Plesk "there's no website here yet" page. Bingo. Now, how can I set up sparedomain.co.uk on my Plesk server so that it displays all the data held on the Plesk account for domain1.co.uk? Frame forwarding doesn't work, because you get errors saying "domain1.co.uk cannot be found" in your browser; I need a server-side solution to spoof it all. Anyone got any ideas? Thanks!
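
    If the only goal is to preview domain1.co.uk before its DNS exists, a hosts-file override on the machine you browse from (independent of Plesk) makes the real hostname resolve to the server, so the Plesk vhost for domain1.co.uk answers directly. A sketch, with 203.0.113.10 standing in for the server's real IP:
      # Linux/Mac test machine; on Windows edit C:\Windows\System32\drivers\etc\hosts instead
      echo "203.0.113.10  domain1.co.uk  www.domain1.co.uk" | sudo tee -a /etc/hosts
    This avoids spoofing anything on the server side at all; remove the line once the real nameservers go live.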

    Read the article

  • Cannot type backquote or backtick in xterm

    - by Cocoro Cara
    Ubuntu 10.10, XTerm (261), keyboard layout: Canadian. Somehow, the backquote (backtick, `) character does not get entered in XTerm: I type it and nothing happens, and the cursor does not move forward. I know the key works because I can input the character in Terminal (gnome-terminal); the only strange thing is that I have to type the key twice for it to appear. Just to test it, I tried typing it in other applications, and the same thing happens: I have to type it twice in Firefox, gedit, etc. One more strange thing: I could not input it into the textbox in which I am typing this message, but I can input it in the URL bar, search bar, etc. Someone please help me solve this mystery. I like to use XTerm and I need the backquotes.

    Read the article

  • How can browsers in VMs resolve hostnames of websites on parent PC?

    - by elliot100
    I have a number of local websites in development on my Windows PC, set up as virtual hosts within Apache, with hostnames (along the lines of dev.example.com) resolved via the hosts file, so I can test them with various browsers. I now want to extend browser testing to running browsers under various OSs in virtual machines, and want to be able to resolve dev.example.com from the VMs. Currently these are a mix of VMware Server and VirtualPC. I know I can edit the hosts file on any Windows VMs, but this is a bit fiddly and I'd like a solution which is independent of the individual VMs. I think what I need is a nameserver, but what's the simplest way of going about this? I'd like everything to be self-contained on the one machine. I think I can cover firewall and Apache permissioning issues.
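
    One self-contained option is a small DNS forwarder such as dnsmasq, running in one always-on VM (or any Linux box on the network), answering for the dev names and forwarding everything else upstream; each VM then just points its DNS at that machine. A sketch, with 192.168.1.10 as a placeholder for the Windows host running Apache:
      # Run inside the Linux VM that will act as the nameserver
      sudo tee /etc/dnsmasq.d/dev-sites.conf <<'EOF'
      address=/dev.example.com/192.168.1.10
      EOF
      sudo service dnsmasq restart
    The address=/domain/ip line answers for dev.example.com and anything under it; each separate dev domain would need its own line.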

    Read the article

  • htaccess order Deny,Allow rule

    - by aspiringCodeArtisan
    I'd like to dynamically add IPs to a block list via htaccess. I was hoping someone could tell me if the following will work in my case (I'm unsure how to test via localhost). My .htaccess file will have the following by default: order allow,deny allow from all IPs will be dynamically appended: Order Deny,Allow Allow from all Deny from 192.168.30.1 The way I understand this is that it is by default allow all with the optional list of deny rules. If I'm not mistaken Order Deny,Allow will look at the Deny list first, is this correct? And does the Allow from all rule need to be at the end?

    Read the article

  • MS Access template: where to find the VB code

    - by tintincutes
    I'm very new to MS Access 2007. I have a copy of a charitable contribution template, charitablecontributions.accdb, and I would like to know where I can find its code. If I open it by holding down the Shift key and double-clicking it, it opens as a normal Access database where I can modify the tables and so on. But when I just double-click the file charitablecontributions.accdb, it opens in a form-only view where the ribbon is gone. I'd like to know how this form view is achieved, because I have a Test.mdb from Access 2003 and I would like it to open in the same kind of form view as charitablecontributions.accdb. I'll appreciate your help. Thanks

    Read the article

  • Windows Embedded Compact 7

    - by Valter Minute
    This will be the official name of the new release of Windows CE. Windows Embedded Compact 7 is available as a public CTP, and it already supports a wide range of CPUs and both the Device Emulator and Virtual PC emulated environments. So I'll have to learn a new (and longer) name for my favorite OS... but I (and all my two readers!) will be able to test it as soon as the download from the Connect web site completes (I'm sorry for my readers, but you'll have to download it by yourselves). Here's a link for the download (it's free, but you'll have to register on Connect with a valid LiveId): https://connect.microsoft.com/windowsembeddedce Remember that this is still a beta (or "Community Technology Preview" if you speak marketing language), so it's better not to install it on your main development PC (or, at least, back up everything before installation), and the features and performance you'll get from this beta may not be the same as in the final release of the OS. You can discover the new features of Windows Embedded Compact on the new "official" webpage on the Microsoft website: http://www.microsoft.com/windowsembedded/en-us/products/windowsce/compact7.mspx or on Olivier's blog: http://blogs.msdn.com/b/obloch/archive/2010/06/01/windows-embedded-compact-7-announced-and-public-ctp-available.aspx I hope to be able to post some interesting content about Windows Embedded Compact 7 soon (and maybe shorten its name to CE7 in my blog posts, once I'm sure that neither of my readers is working for Microsoft's marketing department...). Technorati Tags: "Windows Embedded Compact 7"

    Read the article

  • Scripting language for filling out web form

    - by ityler22
    I have a job as an intern at a technology company, and I was given the unfortunate task of performing some data entry into our web management system. The information entered into the web form is stored in a MySQL DB. Upon receiving the data I realized I would have to submit this online form about 1000 different times, each submission consisting of about 10 text fields / check boxes. (In other words, completely mind-numbing and a ridiculous waste of time and resources, or so I thought...) Having used databases a good bit before this, my immediate reaction was to write a short MySQL script to bulk-import all of the data, especially since it was already presented to me in an Excel spreadsheet ready to go. I thought it might have been some sort of a test since it seemed too obvious. I wrote the script, which consisted of about 10 lines of code, but was then informed I couldn't be trusted with MySQL admin privileges to run it. So my next thought is to write a script that just enters the information through the web form (which will take ten times longer, but it's what I have to do). Being unfamiliar with scripting of this nature (it seems like I would need something similar to a bot, but the good kind), I was unsure how to proceed. Is there a preferred language for entering the data I have into the web form I do have access to? I'm not particularly looking for this to be done for me; just a nice point in the right direction as far as what scripting language to use and how to pair it with the data that needs to be entered. Thanks for the help / valuable input! EDIT: Is there a way to do this with Perl without being able to place any files on the server? Or could I run some JavaScript loops to pull the data out of a .csv (or just a .txt with line delimiters) and insert it into the web form?
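
    For the simplest case (a plain HTML form with no JavaScript-generated tokens), one low-ceremony sketch that needs no access to the server at all is to export the spreadsheet as CSV and POST each row with curl from a shell loop; the field names and URL below are placeholders for whatever the real form uses:
      # data.csv: one record per line, two comma-separated columns in this example
      while IFS=, read -r name category; do
        curl -s -o /dev/null \
             --data-urlencode "name=$name" \
             --data-urlencode "category=$category" \
             "http://intranet.example.com/entries/submit"
      done < data.csv
    If the form sits behind a login, the session cookie can be captured once and replayed with curl's -c/-b options; if it depends on JavaScript to build the request, a browser-automation tool is the usual fallback.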

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check and not only in case of an error (like bad blocks)? Here's the output:
      linux-box# fsck.ext3 -c /dev/sdx1
      e2fsck 1.40.2 (12-Jul-2007)
      Checking for bad blocks (read-only test): done
      Pass 1: Checking inodes, blocks, and sizes
      Pass 2: Checking directory structure
      Pass 3: Checking directory connectivity
      Pass 4: Checking reference counts
      Pass 5: Checking group summary information
      Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
      Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.
    Simpler Code
    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).
    Strongly Typed
    Before diving into the code, the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.
      // With the SDK
      public class MyData1 : TableServiceEntity
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }
      // With the Enzo Azure API
      public class MyData2 : BaseAzureTable
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }
    Simpler Code
    Now that the classes representing an Azure Table entity are defined, let's review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):
      // With the Azure SDK
      public List<MyData1> FetchAllEntities()
      {
          CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
          CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
          TableServiceContext serviceContext = tableClient.GetDataServiceContext();
          CloudTableQuery<MyData1> partitionQuery =
              (from e in serviceContext.CreateQuery<MyData1>(_tableName)
               select new MyData1()
               {
                   PartitionKey = e.PartitionKey,
                   RowKey = e.RowKey,
                   Timestamp = e.Timestamp,
                   Message = e.Message,
                   Level = e.Level,
                   Severity = e.Severity
               }).AsTableServiceQuery<MyData1>();
          return partitionQuery.ToList();
      }
    This code gives you automatic retries because the AsTableServiceQuery does that for you. Also, note that this method is strongly-typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:
      // With the Enzo Azure API
      public List<MyData2> FetchAllEntities()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
          return res;
      }
    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).
    Fetch Strategies
    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is an example that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):
      public List<MyData2> FetchAllEntitiesGUID()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
          return res;
      }
    Faster Results With Sequential Fetch Methods
    Developing a faster API wasn't a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).
    With Fetch Strategies
    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with a higher bandwidth.
    Additional Methods
    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries...)
    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).
    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • MySQL: calculate number of connections needed

    - by Udi I
    I am trying to figure out my needs regarding web service hosting. After trying Azure I have realized that the default MySQL they provide (through a third party) limits the account to 4 connections. You can then upgrade the account to 15, 30 or 40 connections (which is quite expensive). Their 15-connection plan is described as: "Excellent choice for light test and staging apps that need a reliable MySQL database". I have 2 questions: if my application is a web service which needs to perform ~120k queries a day (normal/bell distribution) and each query takes ~150ms (duration) / ~400ms (fetch), how many connections do I need? And if, instead of cloud computing, I choose a VPS, how many connections will I be able to handle on a 1GB, 2-core VPS? Thank you!
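
    As a back-of-envelope check only (assuming the ~150 ms/~400 ms figures are per-query wall time): 120k queries a day averages about 1.4 queries per second, and at roughly 0.55 s per query Little's law gives an average concurrency of about 1.4 x 0.55 ≈ 0.8 simultaneous queries; even a bell-curve peak of four or five times the average is still only 3-4. So the query volume itself fits comfortably within a 4- or 15-connection plan, and the limit that usually bites first is the number of persistent connections held open by web-server workers or a connection pool, not throughput.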

    Read the article

  • Cannot use second display with 12.04 and Intel 2000/3000

    - by Carolyn Marenger
    I am unable to get anything to display on my second monitor, or even get the system to recognize that there is a second screen. I am running Ubuntu 12.04 on a Gigabyte GA-H61M-S2PV, revision 2.0, box. The integrated chipset is an Intel 2000/3000, and there are both a D-Sub and a DVI-D display port on the motherboard. This is the first operating system I have installed on this system. I have a second monitor plugged into the DVI-D port via a DVI-D to D-Sub adapter. I cannot verify that the motherboard or adapter were/are working, short of installing Windows to test the theory. When I go into the "System Settings - Displays" control window, it shows one display. I have rebooted with the second monitor attached, and I have perused the BIOS settings in case it might have been disabled. So far, I have had no indication that the second monitor is recognized, not even a flicker at power-on. If I swap monitors and cables between the DVI-D and D-Sub ports, the other monitor lights up, so I know the monitor and video cable are not the issue. Any suggestions would be appreciated. Thanks, Carolyn
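
    From the Ubuntu side, one quick check is whether the driver sees the second connector at all; if it is not even listed as an output, the problem is below X (adapter, cable or BIOS). The output names below are placeholders, since they vary by driver:
      # List every output the Intel driver detected and whether a monitor is sensed on it
      xrandr --query
      # If a second output (e.g. VGA1, HDMI1, DP1) appears, try enabling it explicitly
      xrandr --output VGA1 --auto --right-of HDMI1
    Worth noting as a possible hardware explanation: a passive DVI-to-VGA (D-Sub) adapter only works on DVI-I ports that carry an analog signal; a DVI-D-only port cannot drive a VGA monitor through such an adapter, which would match the complete absence of a signal here.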

    Read the article

  • YSlow says certain CSS files are not gzipped

    - by rhand
    YSlow keeps on telling me files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped while the gzip test tool at Feed the Bot mentions I am all good:
      Compressed? Yes
      Compression type: gzip
      Page size (Bytes): 32,493
      Compressed size (Bytes): -1
      Saving (Bytes): 32,494
      Compression %: 100%
    I added this to my .htaccess:
      # Gzip
      <ifModule mod_gzip.c>
      mod_gzip_on Yes
      mod_gzip_dechunk Yes
      mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
      mod_gzip_item_include handler ^cgi-script$
      mod_gzip_item_include mime ^text/.*
      mod_gzip_item_include mime ^application/x-javascript.*
      mod_gzip_item_exclude mime ^image/.*
      mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
      </ifModule>
      #Deflate
      <ifmodule mod_deflate.c>
      AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
      </ifmodule>
    The header for the file mentioned states:
      CF-Cache-Status: MISS
      CF-RAY: 13945df90a9a0c1d-AMS
      Cache-Control: public, max-age=2592000
      Connection: keep-alive
      Content-Encoding: gzip
      Content-Type: application/javascript
      Date: Thu, 12 Jun 2014 07:34:38 GMT
      Expires: Sat, 12 Jul 2014 07:34:38 GMT
      Last-Modified: Thu, 21 Feb 2013 01:29:18 GMT
      Server: cloudflare-nginx
      Transfer-Encoding: chunked
      Vary: Accept-Encoding
    Any ideas what I am missing here?
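
    To see exactly what YSlow sees for that one stylesheet (rather than the page as a whole), it can help to request the specific URL with an Accept-Encoding header and look at the response headers; -D - dumps the headers from a real GET, which avoids servers that skip compression on HEAD requests:
      curl -s -o /dev/null -D - \
        -H "Accept-Encoding: gzip,deflate" \
        "http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2" \
        | grep -i "content-encoding"
    If that line comes back empty for the .css URL while the page itself tests fine, the per-file result is the one YSlow is reporting on.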

    Read the article

  • Can I automatically add a new host to known_hosts ?

    - by gareth_bowles
    Here's my situation; I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via SSH. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client. The problem I'm having is that the first SSH command run against a new virtual instance always comes up with an interactive prompt: The authenticity of host '[hostname] ([IP address])' can't be established. RSA key fingerprint is [key fingerprint]. Are you sure you want to continue connecting (yes/no)? Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image ? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.
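
    One common approach, assuming the launcher knows each new hostname or IP as it boots the instance, is to pre-populate known_hosts with ssh-keyscan, or to relax strict checking just for these scripted connections. A sketch with a placeholder address:
      # Append the new host's (hashed) key before the first real connection
      ssh-keyscan -H 10.0.0.25 >> ~/.ssh/known_hosts
      # Alternative: accept whatever key the host presents, for this command only
      ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null user@10.0.0.25 'uname -a'
    Both variants trust whatever key the instance offers on first contact, which is usually acceptable for throwaway test VMs; if the image really does have a fixed host key baked in, adding its public key once against a wildcard host pattern in known_hosts should also work.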

    Read the article

  • Vetting Github Pull requests with Hudson

    - by cdecker
    I've been using Gerrit and Hudson very successfully to test and automatically vote on new checkins in the past, and now I'm wondering whether it is possible to set up Hudson so that it checks GitHub at regular intervals and looks for new pull requests. If there are any, it should apply the patch and run the unit tests against it, adding a comment to the pull request if no failure is detected. It would certainly reduce the amount of work that goes into vetting patches/pull requests. Is that possible at all, or should I stick with my Gerrit setup?
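
    GitHub exposes every pull request as a fetchable ref, so a polling Hudson/Jenkins job can do the apply-and-test part with plain git before any plugin is involved; a rough sketch of a build step (the PR number and test command are placeholders):
      # Fetch all open pull request heads into local pr/<number> refs
      git fetch origin '+refs/pull/*/head:refs/remotes/origin/pr/*'
      # Merge one of them onto the target branch and run the tests against the result
      git checkout -B pr-test origin/master
      git merge --no-ff origin/pr/42
      ./run-tests.sh
    Posting the pass/fail comment back to the pull request afterwards is a separate call to the GitHub API.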

    Read the article

  • Connect Lenovo Thinkpad Edge e120 to WQHD display (such as Dell U2711) through HDMI

    - by Fulvio
    On paper, the HDMI version of the Lenovo Thinkpad Edge e120 is 1.3, which supports WQHD. Does it actually work? I want to connect the e120 to a WQHD display, such as the Dell U2711 or an Apple Cinema Display. I run Windows 7 Ultimate and latest drivers. The e120 I think has the Intel HD Graphics 3000 onboard chipset. I don't have the chance to test out the e120 with neither of these displays, so I'm seeking for an opinion from those who have tried. Thank you

    Read the article

  • How can I simulate a slow machine in a VM?

    - by Nathan Long
    I'm testing an AJAX-heavy web application. I develop on a new Mac, but I use VMware Fusion (currently 3.1.2) to test in Windows XP, using IETester to simulate older versions of IE. This lets me see how older IE versions would render the site, but I'd also like to see how the site would perform on an older machine. I see in the VM's settings that I can decrease the RAM; is there a way to also dial down the processor speed? How else might I simulate a slow machine? (I am also going to check out how to simulate a slow internet connection.)
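
    Fusion itself has no CPU-frequency slider, but the VM's process can be throttled on the host, which gets reasonably close to an older, slower machine. A sketch using cpulimit on the Mac (assuming it is installed, e.g. via MacPorts or Homebrew, and that the VM runs as a vmware-vmx process):
      # Find the running VM's worker process
      pgrep -fl vmware-vmx
      # Cap that process at roughly 30% of a core; stop cpulimit to restore full speed
      sudo cpulimit -p <PID> -l 30
    Giving the VM a single virtual CPU in its settings, on top of the reduced RAM, also helps approximate older hardware.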

    Read the article

  • Linux wall command won't broadcast strings

    - by mjb
    I read here that this should work, but it doesn't:
      //usage: wall [file]
      root@sys:~> mesg
      is y
      root@sys:~> wall "who's out there"
      wall: can't read who's out there.
    If mesg is set to y, what's preventing me from broadcasting a string? Note, I did confirm that the file option works:
      root@sys:~> wall test
      Broadcast Message from root@sys (/dev/pts/1) at 15:23 ...
      Who's out there?
    Teach me knowledge please. mjb
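
    For what it's worth, the error suggests this version of wall treats its argument strictly as a file name (hence "can't read who's out there"); it reads standard input when no file is given, so piping the string in works:
      # Broadcast a one-off string by feeding it to wall on stdin
      echo "who's out there" | wall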

    Read the article

  • Collision detection with non-rectangular images

    - by Adam Smith
    I'm creating a game and I need to detect collisions between a character and some parts of the environment. Since my character's frames are taken from a sprite sheet with a transparent background, I'm wondering how I should go about detecting collisions between a wall and my character only if the colliding parts are non-transparent in both images. I thought about checking whether the rectangle the character is in touches the rectangle a tile is in, and then comparing the alpha channels, but then I have another choice to make. Either I test every single pixel against every single pixel in the other image and report a collision if any pair overlaps, which would be terribly inefficient. The other option would be to keep an x,y position of the leftmost, rightmost, etc. non-transparent pixel of each image and compare those instead. The problem with this one might be that, for instance, the character's hand could be above a tile (so it would be in a transparent zone of the tile) but a pixel that is not the rightmost could touch part of the tile without being detected. Another problem would be that in different frames, the rightmost, leftmost, etc. pixels might not be at the same position. Should I not bother with that and just check the collisions on the rectangles? It would be simpler, but I'm afraid people will feel that there are collisions sometimes that shouldn't happen.

    Read the article

  • How to approach scrum task burn down when tasks have multiple peoples involvement?

    - by AgileMan
    In my company, a single task can never be completed by one individual; there is always a separate person to QA and code review each task. What this means is that each individual gives their estimate, per task, of how much time their part will take to complete. The problem is, how should I approach burn down? If I aggregate the hours together, assume the following estimate:
      10 hrs - Dev time
      4 hrs - QA
      4 hrs - Code Review
      Task estimate = 18 hrs
    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their part of it. Should they mark their own effort remaining, and then ADD the other effort estimates to that? How are you guys doing this? UPDATE: To help clarify a few things, at my organization each task within a story requires 3 people:
      Someone to develop the task (including unit tests, etc.)
      A QA specialist to review the task (they primarily do integration and regression tests)
      A tech lead to do code review
    I don't think there is a wrong way or a right way, but this is our way... and that won't be changing. We work as a team to complete even the smallest level of a story whenever possible. You cannot actually test whether something works until it is dev complete, and you cannot review the quality of the code either... so the best you can do is split things up into small logical slices so that the bare minimum functionality can be tested and reviewed as early in the process as possible. My question to those who work this way would be how to burn down a "task" when tasks are set up this way. Unless a task has its own sub-tasks (which JIRA doesn't allow)... I'm not sure of the best way to track "what's left" on a daily basis.

    Read the article
