Search Results

Search found 24715 results on 989 pages for 'output parameters'.

Page 584 of 989

  • Is the Zune HD's audio card better or worse than the iPod touch's?

    - by MatthewThepc
    Firstly, if this is the wrong site to ask this question I apologize, but I didn't see one for "music players" on the Stack Exchange website :) After reading a few people online say that music played from a Zune HD sounds better to them than music on an iPod Touch, I was wondering whether there's any truth to that. From what I can tell, the Zune HD uses a Wolfson Microelectronics WM8352, while the first-generation iPod Touch (which the Zune HD was competing with) used a Wolfson Microelectronics WM8758BG, and newer models use the Cirrus Logic CS4398 and CS42L61. Which ones are better (to make the question less subjective, let's say in terms of quality, range, and accuracy of output)? Admittedly, I have almost no idea how everything compares and works together, but it would seem to me, just from looking at the model numbers, that the iPod has been better since its launch. Is there anything else that affects sound quality? Thanks!

    Read the article

  • Empty interface to combine multiple interfaces

    - by user1109519
    Suppose you have two interfaces: interface Readable { public void read(); } interface Writable { public void write(); } In some cases the implementing objects can only support one of these, but in a lot of cases the implementations will support both interfaces. The people who use the interfaces will have to do something like: // can't write to it without explicit casting Readable myObject = new MyObject(); // can't read from it without explicit casting Writable myObject = new MyObject(); // tight coupling to actual implementation MyObject myObject = new MyObject(); None of these options is terribly convenient, even more so when you consider that you want this as a method parameter. One solution would be to declare a wrapping interface: interface TheWholeShabam extends Readable, Writable {} But this has one specific problem: all implementations that support both Readable and Writable have to implement TheWholeShabam if they want to be compatible with people using the interface, even though it offers nothing apart from the guaranteed presence of both interfaces. Is there a clean solution to this problem, or should I go for the wrapper interface? UPDATE It is in fact often necessary to have an object that is both readable and writable, so simply separating the concerns in the arguments is not always a clean solution. UPDATE2 (extracted as answer so it's easier to comment on) UPDATE3 Please be aware that the primary use case for this is not streams (although they too must be supported). Streams make a very specific distinction between input and output and there is a clear separation of responsibilities. Rather, think of something like a byte buffer where you need one object you can write to and read from, one object that has a very specific state attached to it. These objects exist because they are very useful for some things like asynchronous I/O, encodings, ...

    Read the article

  • Custom Key Flexfield (KFF) in Oracle Applications

    - by Manoj Madhusoodanan
    In this blog I will explain how to create a custom KFF. I am using the XXCUST_KFF_DEMO table to capture the KFF code combinations. The following steps need to be performed. 1) Register the XXCUST_KFF_DEMO table. Click here to see the code. Verify that the table has been created successfully. Navigation: Application Developer > Application > Database > Table 2) Register the Key Flexfield. Navigation: Application Developer > Flexfield > Key Flexfields 3) Define the structure and segments. Navigation: Application Developer > Flexfield > Key Flexfield Segments Click on the Segments button. Save the created information. Check the Allow Dynamic Inserts check box if you want to create the combination from the KFF display window. Once you complete all the changes, check the Freeze Flexfield Definition check box. 4) Create a sequence XXCUST_KFF_DEMO_S. 5) Try to create a KFF item through OAF or Forms. Here I am using a page based on the table XXCUST_KFF_TRN. You can see the output below.

    Read the article

  • Alternative to the tee command without STDOUT

    - by aef
    I'm using | sudo tee FILENAME quite often to be able to write or append to a file for which superuser permissions are required. Although I understand why it is helpful in some situations that tee also sends its input to STDOUT again, I have never actually used that part of tee for anything useful. In most situations, this feature only fills my screen with unwanted clutter unless I go the extra step and manually silence it with tee 1> /dev/null. My question: is there a command around which does exactly the same thing as tee, but by default does not output anything to STDOUT?
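
    A minimal sketch of two common workarounds, assuming GNU/Linux with the moreutils package available; the function name "overwrite" is only an illustration:

      # moreutils' sponge soaks up stdin and writes it to the file without echoing anything
      echo "some setting" | sudo sponge /etc/some.conf

      # or wrap the tee-plus-/dev/null idiom once so it never clutters the screen again
      overwrite() { sudo tee "$1" > /dev/null; }
      echo "some setting" | overwrite /etc/some.conf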

    Read the article

  • How to diagnose Ubuntu CPU spikes / IO wait?

    - by Jeff Welling
    I'm using Ubuntu, and every couple of minutes it goes unresponsive for half a second to a full second. That isn't normally a problem, but it makes trying to code extremely frustrating when you're trying to hit backspace or navigate the code and nothing is happening. The problem is that the freezes are so brief that top doesn't have time to show me what is spiking the CPU (assuming something is, but I don't know what else could cause this). Does anyone know how to troubleshoot this performance issue? Edit: I've tried logging in with Gnome Classic (No Effects) instead of Unity, but it still freezes up every once in a while. Edit: The CPU graph doesn't seem to be showing any actual spikes, so it seems you were right and my original diagnosis of CPU spikes being the problem was incorrect; I now suspect IO wait. I don't recall this happening during the brief few weeks I had Windows 7 Starter running on it, though, which leads me to believe it isn't (just?) the hardware. Is there anything I can tweak to improve this? I'm using an Acer Aspire One D257, with Ubuntu 11.10. Edit: Output of dmesg is at http://paste.ubuntu.com/1060054/ and kern.log is at http://paste.ubuntu.com/1060055/
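
    A hedged sketch of how the I/O-wait suspicion could be confirmed, assuming the iotop and sysstat packages from the Ubuntu repositories:

      sudo apt-get install iotop sysstat
      # watch the "wa" (I/O wait) column once per second while a freeze happens
      vmstat 1
      # log only processes actually doing I/O, with timestamps, and review the log after a freeze
      sudo iotop -obt -qqq >> ~/iotop.log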

    Read the article

  • Ubuntu 12.04.1 LTS and Nvidia driver (304.51) 64-bit: stuck at 640x480

    - by nibianaswen
    I have a problem with this configuration: Asus K55V, Ubuntu 12.04 LTS and Nvidia driver 304.51. I removed the nouveau driver with: apt-get --purge remove xserver-xorg-video-nouveau I installed the official Nvidia driver (from www.nvidia.com), but when I reboot the PC the screen resolution is only 640x480 and the monitor is resized. Changing xorg.conf did not solve the problem. I have now uninstalled that driver and reinstalled with sudo apt-get purge nvidia-current sudo apt-add-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install nvidia-current When I reboot, the screen resolution and size are OK, but if I start nvidia-settings I receive the message: You do not appear to be using the NVIDIA X driver. and with the command: sudo lshw -c display | grep driver I receive configuration: driver=i915 latency=0 This sounds like the system is using the Intel card. When I launch the command lspci | grep VGA the output is: 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation Device 1058 (rev ff) And there is no /etc/X11/xorg.conf. I have read a lot of guides on the internet but without success. How can I use the Nvidia card with the driver that I have installed?
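
    The lspci output (Intel plus NVIDIA) points at an Optimus hybrid-graphics laptop, which the plain nvidia-current driver does not handle by itself on 12.04. A sketch of the Bumblebee route that was the usual fix at the time, assuming the bumblebee PPA is still reachable:

      sudo apt-add-repository ppa:bumblebee/stable
      sudo apt-get update
      sudo apt-get install bumblebee bumblebee-nvidia
      # after a reboot, run individual programs on the NVIDIA GPU on demand
      optirun glxgears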

    Read the article

  • Best FFDShow settings for upscaling SD content to 1080p HD?

    - by Richard
    Hi there, I'm running Windows Media Center 7 with ffdshow-tryouts for decoding many of the popular video formats. It works great. I've now upgraded my television from SD to 1080p HD and, naturally, I've still got a large amount of existing MP4/XviD/DivX content in SD. I'd therefore like to modify the settings of ffdshow so that this content is upscaled to 1080p as well as possible. I appreciate that it won't be as good as its HD equivalent, but on the flip side I'm pretty certain I can do more than just resize the picture to get the best possible output. Can anyone recommend the best settings in ffdshow to do this? For example, should I apply a sharpen mask? Or noise reduction? Or deinterlacing? Alternatively, would it be better to keep the content at its current resolution and let the TV (Samsung Series 5 LE32C580) do the upscaling? Thanks.

    Read the article

  • Django HttpResponseRedirect acting as proxy rather than 302

    - by Trevor Burnham
    I have a Django view that returns return HttpResponseRedirect("/redirect-target") When running the server locally, if I visit the page that returns that redirect, I get the log output [17/Oct/2013 15:26:02] "GET /redirecter HTTP/1.1" 302 0 [17/Oct/2013 15:26:02] "GET /redirect-target HTTP/1.1" 404 0 as expected. But when I visit that page in Chrome, the Network tab shows the request to /redirecter with the response from /redirect-target, rather than showing the 302. cURL does the same: $ curl -I -X GET http://localhost/redirecter HTTP/1.1 404 Not Found date: Thu, 17 Oct 2013 19:32:30 GMT connection: keep-alive transfer-encoding: chunked In production, the same Django code does show a 302 redirect in Chrome and cURL. What could be going on here? Is there some kind of Django setting that might be causing it to proxy the target rather than send a redirect when HttpResponseRedirect is used (but lie about it in the log)? Or is there a quirk on my system (OS X) that might cause localhost redirects to behave this way?

    Read the article

  • Google suddenly only indexes https and not http

    - by spender
    So all of a sudden, searches for our site "radiotuna" return the result as an HTTPS link. https://www.google.com/?q=radiotuna#hl=en&safe=off&output=search&sclient=psy-ab&q=radiotuna&oq=radiotuna&gs_l=hp.12...0.0.0.3499.0.0.0.0.0.0.0.0..0.0.les%3B..0.0...1c.LnOvBvgDOBk&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&fp=177c7ff705652ec3&biw=1366&bih=602 We only use HTTPS for the download of two specific files (these URLs are resources used for the auto-update functionality of an app we distribute). All other parts of the site should be served over HTTP. We wouldn't like to see any other traffic over HTTPS, nor any of our site links appearing in search engines as HTTPS. I'd like to address this issue. It seems that the following solutions are available: hand out an HTTPS-specific robots.txt such as: User-agent: * Disallow: / and/or, at app level, 301 permanent redirect all requests (except the two above) to HTTP if they come in as HTTPS. My concern with the robots.txt method is that if, for some reason, Google decided not to index the HTTP pages, disallowing the HTTPS pages might mean that Google has nothing left to index, with disastrous consequences for our ranking. This means I'm inclined to go with a 301 redirect. Any thoughts?
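
    If you go the 301 route, a quick hedged check with curl (the exact hostname and path here are assumptions) confirms the redirect is in place before you wait for Google to recrawl:

      # expect "HTTP/1.1 301 Moved Permanently" and a Location: header pointing at the http:// URL
      curl -I https://www.radiotuna.com/
      # the two whitelisted download URLs should keep answering 200 over HTTPS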

    Read the article

  • Problem reinstalling GRUB

    - by DisgruntledGoat
    I have a PC that dual-boots Ubuntu and Windows Vista. I recently reinstalled Windows Vista after some problems, and now the bootloader's gone. I've been trying to follow this Ubuntu community guide, but it's not working. I have GRUB Legacy according to the first part (I installed Ubuntu 9.04 originally, then upgraded). From the 9.04 LiveCD, I ran this: sudo grub-install --root-directory=/media/disk /dev/sda5 sda5 is the Ubuntu partition. I get this output: grub-probe: error: Cannot open `/boot/grub/device.map` [: 494: =: unexpected operator Installing GRUB to /dev/sda5 as (hd0,4)... Installation finished. No errors reported. This is the contents...(etc) (hd0) /dev/sda In the bit below, when I run setup (hd0) I get an error, "Error 17: Cannot mount selected partition". A little help?
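
    For GRUB Legacy the usual fix is to point grub-install at the disk's MBR rather than at the partition; a sketch using the device names from the post:

      sudo mkdir -p /mnt/ubuntu
      sudo mount /dev/sda5 /mnt/ubuntu
      # install stage1 to the MBR of the whole disk, using the stage files on sda5
      sudo grub-install --root-directory=/mnt/ubuntu /dev/sda

    Alternatively, from the grub shell: find /boot/grub/stage1 to confirm the right (hdX,Y), then root (hd0,4) followed by setup (hd0), again targeting the disk rather than the partition.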

    Read the article

  • Broadcom BCM4313 takes ages to connect

    - by Drazgo
    I'm having issues with my broadcom BCM4313 wireless adapter. Everything works just fine when connected (with additional drivers & Connman), but it takes about 5 minutes to connect to my network when i just started my computer! When resuming from hibernation it goes very quick though, so just when I boot my pc it's taking forever... This is what I found in the dmesg output: [ 16.778057] eth1: Broadcom BCM4727 802.11 Hybrid Wireless Controller 5.60.48.36 [ 16.808768] type=1400 audit(1295859939.727:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient3" pid=833 comm="apparmor_parser" [ 16.808815] type=1400 audit(1295859939.727:3): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient3" pid=799 comm="apparmor_parser" [ 16.808825] type=1400 audit(1295859939.727:4): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient3" pid=826 comm="apparmor_parser" [ 16.809367] type=1400 audit(1295859939.727:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=833 comm="apparmor_parser" [ 16.809415] type=1400 audit(1295859939.727:6): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=799 comm="apparmor_parser" [ 16.809435] type=1400 audit(1295859939.727:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=826 comm="apparmor_parser" [ 16.809705] type=1400 audit(1295859939.727:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=833 comm="apparmor_parser" [ 16.809755] type=1400 audit(1295859939.727:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=799 comm="apparmor_parser" [ 16.809769] type=1400 audit(1295859939.727:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=826 comm="apparmor_parser" [ 16.844083] alloc irq_desc for 22 on node -1 [ 16.844087] alloc kstat_irqs on node -1 Any ideas how come? Thanks in advance!

    Read the article

  • Does cpio produce platform-dependent archives?

    - by TiCL
    I made a cpio archive with the following command on Solaris 11 (SPARC): find . | cpio -ov >/tmp/myarchive.cpio I copied it to an Intel-based Solaris 11 machine and tried to extract it using the following command: cpio -icvdu < myarchive.cpio It gives me the following error: cpio: Not a cpio file, bad header. 1 errors The MD5 sums match, and I can extract it on another SPARC machine. My question: does cpio produce platform-dependent output? Is there any way to convert it? I cannot use tar at the moment, because the directory I am archiving has long symbolic links that are skipped by the tar command.
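
    The archive was written with cpio's default binary header, which is byte-order dependent, while the extract command's -c flag expects the portable ASCII header; writing with -c on the SPARC side is the usual fix. A sketch:

      # on the SPARC machine: write the archive with the portable ASCII (-c) header
      find . | cpio -ocv > /tmp/myarchive.cpio
      # on the x86 machine: extract with the matching -c flag (as in the original command)
      cpio -icvdu < myarchive.cpio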

    Read the article

  • Need Boxee/Vista remote control recommendations

    - by gurun8
    I have an Acer Aspire X1200 (specs link below) desktop computer that I've installed Boxee on, and it works great. It looks like it's going to be a perfect computer for a media player: it's small, with lots of hard drive space, and it even has an HDMI output. http://support.acer.com/acerpanam/desktop/0000/Acer/AspireX1200/AspireX1200sp2.shtml I also have a Harmony® 880 Advanced Universal Remote (specs link below) that I'd like to use with the Acer computer. www.logitech.com/en-us/440/372 My thought is that I need to get a USB remote control adapter, right? So I did a quick Google search and found products from $5 to hundreds of dollars. Does anyone have recommendations? Things you like/dislike about certain remotes? This Acer is running Vista and has an AMD LIVE! Athlon X2 dual-core processor. 64-bit sometimes makes things a little more difficult. Thanks for the help.

    Read the article

  • puppet restart service failure

    - by Roman
    Help me please with a service restart. # changing the iptables file file { "/etc/sysconfig/iptables": ensure => "present", content => template("all_in_one/iptables.erb"), owner => root, group => root, mode => 600, } service { "iptables": ensure => running, enable => true, hasstatus => true, hasrestart => true, subscribe => File["/etc/sysconfig/iptables"] } The output is: err Failed to call refresh: Could not restart Service[iptables]: Execution of '/sbin/service iptables restart' returned 1
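
    Puppet is only reporting that the restart command exited non-zero; a hedged way to find the real cause is to run the same command by hand and syntax-check the generated rules file (the --test flag assumes a reasonably recent iptables-restore):

      # run exactly what Puppet runs and read the error text and exit code
      sudo /sbin/service iptables restart; echo "exit: $?"
      # the restart is essentially an iptables-restore of this file, so a template error shows up here too
      sudo iptables-restore --test < /etc/sysconfig/iptables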

    Read the article

  • Install on Acer Aspire 4752

    - by user216962
    I am at my wits' end with this computer. I bought an Acer Aspire 4752 with a fully loaded version of Windows 7 on it. I prefer Ubuntu, so I began to install 14.04 from USB. Got the error: [Errno 5] Input/output error This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment. So I tried a different USB stick, same error. Tried different versions of Ubuntu, got the same error. I've used Startup Disk Creator and UNetbootin to make bootable USB devices. I can boot with the USB drive and run Ubuntu that way. I even checked the hard drive using the tools in Ubuntu. Everything was fine, except it said the hard drive was hot. I tried a different hard drive. Got the same error as above. I ran a test with memtest86; everything was fine. No matter what I do, using the USB gives me the Errno 5 error. I then switched to using DVDs. Now I keep getting a decompression error when installing Ubuntu 14.04 or 12.04. I can't figure out for the life of me why I get nothing but errors. Can anyone help?
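
    Errno 5 during an install very often traces back to a corrupted image or a bad USB write rather than to the hardware; a sketch of ruling that out (the ISO name and /dev/sdX are placeholders):

      # compare against the MD5SUMS file published for the release
      md5sum ubuntu-14.04-desktop-amd64.iso
      # write the image raw to the stick, double-checking the device name first
      sudo dd if=ubuntu-14.04-desktop-amd64.iso of=/dev/sdX bs=4M
      sync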

    Read the article

  • How does Azure memory usage work?

    - by Jed Grant
    I have a Windows Azure website. In the dashboard it shows me that I have used 1.51 GB of the 2 GB available per hour. I keep increasing the number of instances available in the shared node so the site doesn't shut down. After each hour finishes, the memory usage still shows 1.51 GB used. I assumed this would start at zero and then be used up as time goes on, but that doesn't appear to be the case. How does server memory work here? What are some reasons my application is using this much memory? (I use no output caching and generally have just built off the basic MVC templates provided in Visual Studio.) What other considerations should I be making to get the amount of memory needed to decrease?

    Read the article

  • Missing disk space in Windows XP

    - by Jørn Schou-Rode
    On my mother's Lenovo laptop, Windows XP claims that the hard drive is almost full. According to the properties window, 52.7 of 55.2 GB is in use: By deleting temp files from Internet Explorer, System Restore, the Recycle Bin, Windows Update and System Cleanup, I managed to free up about one GB. That's still 50 GB in use, which is still a lot more than I expected. Hence, I gave good old WinDirStat a spin, and here's the output: It might be hard to read here, but the first line says that the total amount of disk space in use on drive C is 24.3 GB. So Windows claims usage of 52.7 GB and WinDirStat can only account for 24.3 GB. Where is the other half of that disk space being used? I hope someone has an answer, or some tricks or tips for further research. UPDATE: The laptop in question has an SSD. I am aware that these disks (at least the earlier ones) have a limited lifetime. Could the symptoms described be caused by wear and tear on the SSD?

    Read the article

  • Lost WiFi after 12.10 upgrade

    - by Steven Guillory
    I received my new Dell Vostro 2420 last week, and just got around to upgrading from 11.10 to 12.10. Unfortunately, like many others (after researching the issue), I no longer have WiFi. I have tried every sudo command given that worked for others, and still can't get my wireless to function. I am new to Linux, so any and all help is appreciated. Thanks in advance! Edit: I can connect via ethernet, just not via wifi. As a matter of fact, when I use Fn + F2 to turn on wifi, only my bluetooth comes on. lspci 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) 00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4) 00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4) 00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04) 00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04) 07:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01) 09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07) This is what I am getting... dpkg: error: --install needs at least one package archive file argument Type dpkg --help for help about installing and deinstalling packages [*]; Use dselect or aptitude for user-friendly package management; Type dpkg -Dhelp for a list of dpkg debug flag values; Type dpkg --force-help for a list of forcing options; Type dpkg-deb --help for help about manipulating *.deb files; Options marked [*] produce a lot of output - pipe it through less or more !
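
    The "Broadcom Corporation Device 4365" line usually corresponds to a BCM43142, which is handled by the proprietary wl driver on Ubuntu; a hedged sketch of reinstalling it over the wired connection:

      sudo apt-get update
      sudo apt-get install --reinstall bcmwl-kernel-source
      sudo modprobe wl
      # confirm the wl module is loaded and claiming the device
      lsmod | grep wl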

    Read the article

  • Solaris 11.1 smb share pam.conf

    - by websta
    I would like to enable an SMB share on Solaris 11.1 x64. My steps: 1) pkg install service/filesystem/smb 2) svcadm enable -r smb/server 3) echo "other password required pam_smb_passwd.so.1 nowarn" >> /etc/pam.conf 4) useradd public 5) smbadm enable-user public 6) zfs set share=name=fs1,path=/rpool/fs1,prot=smb rpool/fs1 7) zfs set sharesmb=on rpool/fs1 8) passwd -r files public Step 8 fails: it is not possible to enter a password; the output is: solaris passwd -r files public Please try again Please try again Permission denied If I uncomment the new line in pam.conf, it is possible to change the password. Nevertheless, it is not possible to access the share from Windows 7. The Solaris machine is reachable with ping. Access with another SMB-enabled user is denied too.

    Read the article

  • Creepy MySQL error during hard work

    - by Kiewic
    Hi, I have a MySQL database installed on an OpenSuse 11.1 server (it is a Bitnami image). The database works fine and can run for many days without any error, but when MySQL receives a huge number of transactions, it dies immediately. The next screen shows the error: Moreover, I don't know how to restart MySQL. I have tried this: /opt/bitnami/mysql/bin/mysqld start But it doesn't work; it gives me the following output: 110209 17:09:01 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root! 110209 17:09:01 [ERROR] Aborting 110209 17:09:01 [Note] /opt/bitnami/mysql/bin/mysqld.bin: Shutdown complete It doesn't matter what kind of statements are executing; if there are a huge number of them, MySQL dies. The MySQL server version is 5.1.30. What can be causing these sudden failures?
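
    The failed restart is the easier half: mysqld refuses to run as root, so it has to be started as the mysql user, either through the stack's control script or through mysqld_safe. A sketch; the paths are assumptions about the usual Bitnami layout:

      # the Bitnami control script, if present
      sudo /opt/bitnami/ctlscript.sh restart mysql
      # or start the daemon directly as the mysql user
      sudo /opt/bitnami/mysql/bin/mysqld_safe --defaults-file=/opt/bitnami/mysql/my.cnf --user=mysql &
      # the error log under the data directory usually says why it died under load (often out of memory)
      sudo tail -n 50 /opt/bitnami/mysql/data/*.err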

    Read the article

  • ORAchk version 2.2.5 is now available for download

    - by Gerry Haskins
    Those awfully nice ORAchk folks have asked me to let you know about their latest release... ORAchk version 2.2.5 is now available for download, new features in 2.2.5: Running checks for multiple databases in parallel Ability to schedule multiple automated runs via ORAchk daemon New "scratch area" for ORAchk temporary files moved from /tmp to a configurable $HOME directory location System health score calculation now ignores skipped checks Checks the health of pluggable databases using OS authentication New report section to report top 10 time consuming checks to be used for optimizing runtime in the future More readable report output for clusterwide checks Includes over 50 new Health Checks for the Oracle Stack Provides a single dashboard to view collections across your entire enterprise using the Collection Manager, now pre-bundled Expands coverage of pre and post upgrade checks to include standalone databases, with new profile options to run only these checks Expands to additional product areas in E-Business Suite of Workflow & Oracle Purchasing and in Enterprise Manager Cloud Control ORAchk has replaced the popular RACcheck tool, extending the coverage based on prioritization of top issues reported by users, to proactively scan for known problems within the area of: Oracle Database Standalone Database Grid Infrastructure & RAC Maximum Availability Architecture (MAA) Validation Upgrade Readiness Validation Golden Gate Enterprise Manager Cloud Control Repository E-Business Suite Oracle Payables (R12 only) Oracle Workflow Oracle Purchasing (R12 only) Oracle Sun Systems Oracle Solaris ORAchk features: Proactively scans for the most impactful problems across the various layers of your stack Streamlines how to investigate and analyze which known issues present a risk to you Executes lightweight checks in your environment, providing immediate results with no configuration data sent to Oracle Local reporting capability showing specific problems and their resolutions Ability to configure email notifications when problems are detected Provides a single dashboard to view collections across your entire enterprise using the Collection Manager ORAchk will expand in the future with high impact checks in existing and additional product areas. If you have particular checks or product areas you would like to see covered, please post suggestions in the ORAchk subspace in My Oracle Support Community. For more details about ORAchk see Document 1268927.2

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200,000 bytes, as seen in this output of ls -l. root@ad57rs0b# ls -l 15000PN5AIA3I6_B total 232 drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes On ext3, can these large directory sizes slow down performance? Thanks. Aaron
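
    On ext3 a directory inode never shrinks once it has grown, and without the dir_index feature lookups in a huge directory degrade badly; a hedged sketch of both remedies (the device name is a placeholder, and e2fsck must run on an unmounted filesystem):

      # have the writer recreate the directory after each purge so its size drops back to 4096
      rm -rf 15000PN5AIA3I6_B && mkdir 15000PN5AIA3I6_B
      # or enable hashed directory indexes and rebuild them offline
      sudo tune2fs -O dir_index /dev/sdXN
      sudo e2fsck -fD /dev/sdXN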

    Read the article

  • Background process text appears in terminal vim

    - by Jezen Thomas
    First time poster, long time lurker, searched, couldn’t find etc, etc. I’m running vim in tmux, in iTerm2. I’m running a server with Grunt.js, which I have running in the background, out of my way. I start my grunt server in the background like this: grunt server & Grunt also watches a bunch of files, and runs some tasks when any of the watched files have been written to. The problem is, when I am in vim and I write a file, the output from grunt starts rendering in vim! Here are some screenshots to illustrate the problem: Before writing the file: And after writing the file: What have I tried? I’ve tried running a ‘stock’ vim by starting with this: vim -u NONE …But the problem remains. This suggests to me that the problem is not with my .vimrc. Perhaps it’s an issue with iTerm2, I don’t know. Help.
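
    A minimal sketch that keeps grunt's watch output out of the vim pane entirely: send it to a log file and follow the log in a separate tmux pane:

      # start the server with stdout and stderr going to a file instead of the terminal
      grunt server > /tmp/grunt.log 2>&1 &
      # in another tmux pane, watch it only when you care
      tail -f /tmp/grunt.log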

    Read the article

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I frequently visit here and SO, I've noticed a trend of writing unit tests whenever possible and, as my software gets more complex and sophisticated, I see automated testing as a good aid to debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL was returning the correct data, for example? Then, say the data isn't under your control (e.g., it belongs to a 3rd-party system): how can you efficiently test your processing routines without having to hand-write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records, and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky, particularly when it comes to finding data to test against...
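
    The general shape of a fixture-based SQL test, sketched here with the sqlite3 command-line client purely to show the idea (all file names are hypothetical; the same pattern works with any vendor's CLI):

      # load a small hand-built fixture into a throwaway database
      sqlite3 /tmp/fixture.db < fixture_schema_and_rows.sql
      # run the query under test and compare its output to a known-good expectation
      sqlite3 /tmp/fixture.db < query_under_test.sql > /tmp/actual.txt
      diff -u expected_output.txt /tmp/actual.txt && echo PASS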

    Read the article

  • Can't reach custom C# forms application remotely.

    - by gnucom
    Hello, I'm working on Windows Server 2008. I have a very basic C# forms application (not a service) that is listening on a port, say 56112. Using telnet I can connect from the localhost and send and receive data. For some reason I cannot connect to the application remotely. I know connectivity works because I can telnet to port 23 on the machine remotely just fine. I've opened this port in the firewall, created inbound/outbound rules in the advanced firewall, disabled the firewall completely, and more. Any suggestions would be great! This is the telnet output: Microsoft Telnet> open server.cc 56112 Connecting server.cc...Could not open connection to the host, on port 56112: Connect failed
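
    Since local telnet works but remote does not, the usual suspect is the listener being bound to the loopback address only; a quick check from a command prompt on the server (the port number is taken from the post):

      netstat -an | findstr 56112

    If this shows 127.0.0.1:56112, the application only accepts local connections and should bind to 0.0.0.0 (IPAddress.Any in .NET); if it shows 0.0.0.0:56112, the binding is fine and something on the network path is still blocking the port.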

    Read the article
