Search Results

Search found 1128 results on 46 pages for 'sees'.

Page 32/46 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • How do I send e-mails with attachments to a Microsoft WebTV user?

    - by Petr 'PePa' Pavel
    My friend uses Microsoft WebTV (his e-mail address ends with @webtv.net) and I'd like to send him an e-mail with a picture attached. We went through a series of attempts, one of which succeeded; all the others failed. He just can't see my e-mail in his mailbox when it contains an attachment. E-mails without attachments always go through fine.

    What seemed to help in the first successful case was that he added my e-mail address to his address book, and my e-mail suddenly showed up. It seemed to have been delivered earlier but hidden. He kept my address in his address book; however, that didn't help with the following trials. He did look into his junk folder: nothing there. I made sure the file name contains no spaces. It's a regular JPEG, named something-like-this.jpg. I downsized it to only about 50 KB, as I've read somewhere that that's a limit. I actually doubt this piece of information, because I think the successful attempt was larger. webtv.net contains zero information. I watched their video demo of the e-mail client, so I at least know what the user interface looks like. I've never laid my hands on the real thing.

    I'm an advanced user myself (a programmer) but I can't wrap my mind around this. He, on the other hand, is a very technically inexperienced user, and because he's halfway across the globe, I can't come and look over his shoulder. He doesn't have a computer; AFAIK there's no way I could see what he sees. Any ideas on how to debug this? Thanks for your time, guys. P.S. I can't tag this "webtv" because such a tag doesn't exist yet and my reputation is too low, sorry.
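    An aside for anyone who wants to test the size theory methodically: the sketch below sends the same picture at a few sizes using Python's standard library, so you can see exactly where delivery stops. The SMTP relay and addresses are placeholders, not anything WebTV-specific.

    ```python
    import smtplib
    from email.message import EmailMessage

    # Placeholders -- substitute your own SMTP relay and addresses.
    SMTP_HOST = "smtp.example.com"
    SENDER = "me@example.com"
    RECIPIENT = "friend@webtv.net"

    def send_picture(path, subject):
        msg = EmailMessage()
        msg["From"] = SENDER
        msg["To"] = RECIPIENT
        msg["Subject"] = subject
        msg.set_content("Test message with one JPEG attached.")
        with open(path, "rb") as f:
            data = f.read()
        # Keep the file name plain ASCII with no spaces, per the advice above.
        msg.add_attachment(data, maintype="image", subtype="jpeg",
                           filename="something-like-this.jpg")
        with smtplib.SMTP(SMTP_HOST) as server:
            server.send_message(msg)
        print(f"Sent '{subject}': {len(data)} bytes attached")

    # Send the same image at a few sizes to see where delivery stops working.
    for path in ["pic-20k.jpg", "pic-50k.jpg", "pic-200k.jpg"]:
        send_picture(path, f"Attachment test {path}")
    ```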

    Read the article

  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011 acting as a build agent) on a dedicated server (CentOS 6.3). Recently, I ran updates on the host, and the virtual machine is now unable to connect to the internet. The virtual network is running through NAT, the host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest.

    The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Results from the guest to another server under my control and from an external system to the guest both return "Destination port unreachable". Running tcpdump on the host and destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to send it on at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd.

    The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), mask of 255.255.255.0, and gateway and DNS at 192.168.100.1.

    When originally setting up the vm/net, I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge, I set up a new virtual net using NAT and believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company, didn't change anything on the guest, so as far as I know, nothing else has changed (unfortunately the list of packages updated has since fallen off scrollback and I didn't note it down).
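    Given the suspicion of iptables, a first check worth scripting is whether the NAT table still holds the MASQUERADE rule a libvirt NAT network normally installs; host updates that restart iptables can flush it. A minimal sketch (run as root; the guest subnet matches the settings above, the stock-libvirt assumption is mine):

    ```python
    import subprocess

    # Dump the NAT table and look for a MASQUERADE rule covering the guest
    # subnet; on a stock libvirt NAT network this is installed when the net
    # starts and can vanish if iptables was restarted or flushed by an update.
    GUEST_SUBNET = "192.168.100.0/24"

    out = subprocess.run(
        ["iptables", "-t", "nat", "-S"],  # -S prints rules in iptables-save form
        capture_output=True, text=True, check=True,
    ).stdout

    masq = [line for line in out.splitlines()
            if "MASQUERADE" in line and GUEST_SUBNET in line]

    if masq:
        print("MASQUERADE rule present:")
        print("\n".join(masq))
    else:
        print("No MASQUERADE rule for", GUEST_SUBNET)
        print("Restarting the libvirt network (virsh net-destroy / net-start)")
        print("usually reinstalls it.")
    ```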

    Read the article

  • Can Dovecot IMAP automatically create Maildir folders for new (virtual) users?

    - by user233441
    Hello everyone. I am learning to set up a Dovecot home IMAP server using a virtual Ubuntu 12.04 machine. My intention is eventually to have a home server that uses POP3 to take email from several addresses and remove it from my ISP's servers, while making it accessible through a home IMAP server (this is similar to the setup described at https://help.ubuntu.com/community/POP3Aggregator, which explains how to set up the system with Dovecot version 1 and is thus outdated). I intend to use the ISP's server directly when sending messages, and to BCC all sent messages to myself.

    I've completed the basic setup of the test server: getmail uses POP3 to fetch messages from two test email accounts and successfully delivers them to the respective Maildir-style new folders on the virtual machine. Dovecot then successfully sees these messages. I have two questions:

    1) I had to set up the new, cur, and tmp folders for both of the test accounts manually to get this setup to work. Is there a way to get Dovecot to create these Maildir folders automatically when I create a new virtual user account (e.g., when I add a user and password combination to my Dovecot password file), or is it expected that I write a script to automate that task? (See the sketch below.)

    2) I would welcome any comments you have on how this approach could be improved as I learn to set it up. My motivations with this approach are 1) to enable archiving/storing emails from several hosting providers that impose a cap on server storage, and 2) to give me somewhat greater control over email storage without requiring that I set up and administer a mail server from scratch (which I'm not yet prepared to do) (this follows the recommendations at https://ssd.eff.org/tech/email). Thank you!
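    On question 1: if you do end up scripting it, Python's standard mailbox module creates the new/cur/tmp trio for you, so the script can be a one-liner per user. A minimal sketch (the /home/vmail layout is an assumption; match it to your Dovecot mail_location):

    ```python
    import mailbox
    import os

    # Assumed layout: mail_location = maildir:/home/vmail/%u/Maildir
    VMAIL_ROOT = "/home/vmail"

    def create_maildir(user):
        """Create <root>/<user>/Maildir with new/, cur/ and tmp/ inside."""
        path = os.path.join(VMAIL_ROOT, user, "Maildir")
        os.makedirs(os.path.dirname(path), exist_ok=True)
        # Maildir(..., create=True) makes the directory and the three
        # subfolders if they don't exist; it's a no-op if they do.
        mailbox.Maildir(path, create=True)
        # You may still need to chown the tree to your vmail user afterwards.
        print("ready:", path)

    # Call this right after adding the user/password line to the passwd file.
    create_maildir("testuser@example.com")
    ```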

    Read the article

  • How can I make my Super keys (Windows Key) behave more like Ctrl/Alt/Shift in Linux

    - by deltaray
    After using Ctrl + arrow keys for 13 years to switch virtual desktops in X, I've recently been convinced to switch to using the Super keys instead (the Windows key and the context-menu key, which I've remapped). This all works fine for the most part. However, something is still picking up the key events these keys send, as if they were normal alphanumeric keys. For example, I first noticed in a Google Docs spreadsheet that if I press the Windows key alone over a cell, it starts editing that cell. It doesn't insert anything; it just sends a key event that Firefox sees and starts editing the cell. This caused problems on a collaborative document I was working on: the way Google Docs works, it led to me accidentally erasing the data in a few fields before I realised what was going on. I like using the Super keys, but I want them to behave more like a Ctrl or Alt key does, in that it's a modifier key and doesn't send anything until a second key is pressed.

    My setup is the following:

      Ubuntu 10.10
      XFCE 4
      Microsoft Natural Ergo 4000 keyboard (with the logo scratched out)

    The following is my .Xmodmap file:

      remove Lock = Caps_Lock
      keycode 66 = Escape
      ! The below maps my other windows context menu key.
      keycode 135 = Super_R

    Edit: As requested, here is the relevant output from xev for a keypress and keyrelease of my Super_L (left Windows key):

      KeyPress event, serial 34, synthetic NO, window 0x8200001,
          root 0x15d, subw 0x0, time 2428849342, (177,174), root:(182,228),
          state 0x10, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
          XLookupString gives 0 bytes:
          XmbLookupString gives 0 bytes:
          XFilterEvent returns: False

      KeyRelease event, serial 34, synthetic NO, window 0x8200001,
          root 0x15d, subw 0x0, time 2428849430, (177,174), root:(182,228),
          state 0x50, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
          XLookupString gives 0 bytes:
          XFilterEvent returns: False
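    If it helps with debugging, here is a small diagnostic along those lines: it checks whether keycode 133 (Super_L, per the xev dump above) is actually present in the X modifier map, since a Super key that has fallen out of the map is delivered to applications as an ordinary key. It assumes the third-party python-xlib package is available:

    ```python
    from Xlib import display  # third-party package: python-xlib

    MOD_NAMES = ["shift", "lock", "control", "mod1", "mod2", "mod3", "mod4", "mod5"]
    SUPER_KEYCODE = 133  # Super_L, per the xev output above

    d = display.Display()
    mapping = d.get_modifier_mapping()  # 8 rows of keycodes, one per modifier

    for name, keycodes in zip(MOD_NAMES, mapping):
        if SUPER_KEYCODE in keycodes:
            print(f"keycode {SUPER_KEYCODE} is bound to {name}")
            break
    else:
        # If we get here, Super_L is not a modifier at all -- which would
        # explain applications reacting to the bare keypress.
        print(f"keycode {SUPER_KEYCODE} is NOT in the modifier map;")
        print('try: xmodmap -e "add mod4 = Super_L"')
    ```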

    Read the article

  • Can I disable the message line when launching ``screen -RR``

    - by Jimm Chen
    screen -RR is great. It does one of two things automatically: if there is any detached screen session, it picks one and attaches to it; if there is no detached screen session (no session yet, or all have been attached to other terminals), it creates a new screen session automatically. I use Windows Server Remote Desktop a lot, and screen -RR behaves almost the same way as a client connecting to a Remote Desktop server. It is natural and I like it.

    However, when screen -RR determines it should create a new session, it displays a message line at the bottom of the terminal for 5 seconds. I'd like to suppress this message line because it brings little benefit. In my opinion, a remote user can always easily distinguish whether he is connected to a resumed session (a piled-up display) or a newly created session (a clean display) from what he sees in the terminal window.

    So, is there a way to suppress the "New screen..." nag? Just suppress that very one, not message lines globally. My env: openSUSE 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06.
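    One avenue worth trying (I haven't verified it on 4.00.03 specifically, so treat it as an assumption): screen's msgwait setting controls how long message-line text stays visible, and forcing it to 0 for just your own sessions should hide that startup message without touching anyone else's message line. A sketch that wraps screen -RR in a throwaway rc file:

    ```python
    import os
    import subprocess
    import tempfile

    # Write a one-off screenrc that sources the user's normal config and
    # then sets the message display time to 0 seconds.
    home_rc = os.path.expanduser("~/.screenrc")
    rc = tempfile.NamedTemporaryFile("w", suffix=".screenrc", delete=False)
    if os.path.exists(home_rc):
        rc.write(f"source {home_rc}\n")
    rc.write("msgwait 0\n")   # don't keep messages (incl. "New screen...") on screen
    rc.close()

    # screen -c <file> uses an alternate rc; -RR keeps its reattach-or-create logic.
    subprocess.run(["screen", "-c", rc.name, "-RR"])
    os.unlink(rc.name)
    ```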

    Read the article

  • iPhone 3G can't see any WiFi networks, ever

    - by torbengb
    I've got WiFi turned on in the settings, and "ask before connect" turned off. My iPhone 3G should see several WiFi networks, but it lists none. It also does not list my own network, which my computers see just fine. It worked earlier but stopped working recently (possibly because of, or at the same time as, other trouble, which a restore solved). The iPhone is not jailbroken. The SSID is not hidden and uses WPA2.

    It also finds no WiFi networks when I'm at a friend's house. His iPhone 2G sees several WiFi networks there, including his own. When I use the manual entry method, specifying my home SSID and the proper WPA2 passkey, then click Join, the iPhone says it couldn't find that network. Same at my friend's place, with his SSID and passkey.

    I've just backed up my iPhone, then restored it, to see if refreshing the firmware would help. It didn't change anything. Is my iPhone broken? How can I fix this?

    Read the article

  • Windows Server wbadmin recover with commas

    - by dlp
    I want to recover files with commas in their names from the command line, like so:

      wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwrite:Overwrite -quiet "-Items:C:\Path\To\File, With Comma.txt,C:\Path\To\File 2, With Comma.txt"

    So there are two files:

      C:\Path\To\File, With Comma.txt
      C:\Path\To\File 2, With Comma.txt

    The problem is that wbadmin assumes commas separate the files, so it sees 4 files specified instead of 2. I've tried putting a \ in front of the commas that are part of the file names, like so:

      wbadmin start recovery -version:10/01/2013-12:00 -itemType:File -overwrite:Overwrite -quiet "-Items:C:\Path\To\File\, With Comma.txt,C:\Path\To\File 2\, With Comma.txt"

    but it doesn't work; it just says there's a syntax error. The documentation on TechNet doesn't seem to mention anything that helps either. The OS is Windows Server 2008 R2.

    A clarifying comment: I've changed the file names to be different from the actual names to be less revealing, but I see I also dumbed it down too much. The comma can occur either in the file name itself, like C:\Path\To\File, With Comma.txt, or in the path to the file, like C:\Path, To\Other\File.txt.
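    I don't know of a documented escape for commas in -items; one workaround that sidesteps the list parsing entirely is to recover at folder granularity with -recursive, one wbadmin call per parent folder, and then discard what you didn't need. A sketch (the flags follow wbadmin start recovery's documented syntax, but treat them as assumptions to verify against wbadmin start recovery /? on your box):

    ```python
    import subprocess

    VERSION = "10/01/2013-12:00"

    # One wbadmin run per parent folder: the -items value is then a single
    # path, so wbadmin's comma-splitting never touches the file names.
    # Pick a parent high enough that its own path contains no comma.
    folders = [r"C:\Path"]

    for folder in folders:
        cmd = [
            "wbadmin", "start", "recovery",
            f"-version:{VERSION}",
            "-itemType:File",
            f"-items:{folder}",
            "-recursive",              # recover the folder's contents too
            "-overwrite:Overwrite",
            "-quiet",
        ]
        print("running:", subprocess.list2cmdline(cmd))
        subprocess.run(cmd, check=True)
    ```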

    Read the article

  • Linux iptables / conntrack performance issue

    - by tim
    I have a test setup in the lab with 4 machines:

      2 old P4 machines (t1, t2)
      1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t3), Intel e1000
      1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t4), Intel e1000

    to test Linux firewall performance, since we got bitten by a number of SYN-flood attacks in recent months. All machines run Ubuntu 12.04 64-bit. t1, t2, t3 are interconnected through a 1 GB/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1, t2 play the attackers, generating a packet storm through it (192.168.4.199 is t4):

      hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80

    t4 drops all incoming packets to avoid confusion with gateways, performance issues of t4, etc. I watch the packet stats in iptraf. I have configured the firewall (t3) as follows:

      stock 3.2.0-31-generic #50-Ubuntu SMP kernel
      rhash_entries=33554432 as a kernel parameter
      sysctl as follows:
        net.ipv4.ip_forward = 1
        net.ipv4.route.gc_elasticity = 2
        net.ipv4.route.gc_timeout = 1
        net.ipv4.route.gc_interval = 5
        net.ipv4.route.gc_min_interval_ms = 500
        net.ipv4.route.gc_thresh = 2000000
        net.ipv4.route.max_size = 20000000

    (I have tweaked a lot to keep t3 running when t1+t2 are sending as many packets as possible.) The results of these efforts are somewhat odd:

      t1+t2 manage to send about 200k packets/s each. t4 in the best case sees around 200k packets/s in total, so half of the packets are lost.
      t3 is nearly unusable on the console, though packets are flowing through it (high numbers of soft-irqs).
      The route cache garbage collector is nowhere near predictable and, in the default settings, is overwhelmed by very few packets/s (<50k packets/s).
      Activating stateful iptables rules makes the packet rate arriving on t4 drop to around 100k packets/s, effectively losing more than 75% of the packets.

    And this - here is my main concern - with two old P4 machines sending as many packets as they can, which means nearly everyone on the net should be capable of this. So here goes my question: did I overlook some important point in the config or in my test setup? Are there any alternatives for building a firewall system, especially on SMP systems?
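    Incidentally, when the stateful rules are active it's worth watching the conntrack table itself during the flood, since a full table drops new flows no matter how fast the box can otherwise forward. A small polling helper; the /proc paths are the standard ones on a 3.x kernel, so adjust if your build differs:

    ```python
    import time

    def read_int(path):
        with open(path) as f:
            return int(f.read())

    COUNT = "/proc/sys/net/netfilter/nf_conntrack_count"
    MAX = "/proc/sys/net/netfilter/nf_conntrack_max"

    # Poll once a second while hping3 runs on t1/t2; if count pins at max,
    # the firewall is dropping new flows at the conntrack table, not in
    # the ruleset itself.
    while True:
        count, limit = read_int(COUNT), read_int(MAX)
        print(f"conntrack: {count}/{limit} ({100.0 * count / limit:.1f}% full)")
        time.sleep(1)
    ```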

    Read the article

  • How to fix Truecrypt MBR using Command Prompt or Linux live USB?

    - by Michal Stefanow
    I was playing with TrueCrypt and decided to make a fresh installation of Windows 7 from a USB stick. Unfortunately the Windows 7 installer failed with: "setup was unable to create a new system partition". My entire HDD has been formatted and is visible as 320 GB of unallocated space, but neither fdisk nor the Windows 7 installer nor the Windows XP installer could help. (Windows XP doesn't even see the HDD; it sees only the USB stick and says "not enough space to install".)

    It may be related to TrueCrypt's pre-boot authentication, boot loader and/or MBR. As I don't have an optical drive, I could not create a rescue disk. Right now I need a rescue of some kind, presumably by erasing/fixing the MBR using a Linux live USB or the Command Prompt. Another approach is to click "repair your computer" in the Windows 7 installer menu, then click "restore your computer", then click OK upon the error, and get access to the Command Prompt. Yet another approach: when I start the computer without the Linux USB I receive this:

      error: unknown filesystem.
      grub rescue>

    Any help would be greatly appreciated, as my laptop is kind of not fully operational now.

    UPDATE: This was asked a long time ago; I ended up formatting everything (eventually it worked using a different bootable USB)...
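    If you can boot the Linux live USB, a safe first diagnostic is to read (not write!) the first sectors of the disk and see what state the MBR is actually in. A sketch; /dev/sda is an assumption, and the idea that an intact TrueCrypt loader leaves a readable 'TrueCrypt' string in the first sectors should be verified against a known-good disk:

    ```python
    SECTOR = 512
    DISK = "/dev/sda"   # adjust to your drive; opened read-only, nothing is written

    with open(DISK, "rb") as disk:
        head = disk.read(SECTOR * 64)   # MBR plus the loader's usual home

    mbr = head[:SECTOR]
    # A valid MBR ends in the 0x55 0xAA boot signature.
    print("boot signature:", "present" if mbr[510:512] == b"\x55\xaa" else "MISSING")

    # Assumption: an intact TrueCrypt loader leaves a readable 'TrueCrypt'
    # string somewhere in these first sectors.
    print("'TrueCrypt' string in first 64 sectors:", b"TrueCrypt" in head)

    # The four primary partition table entries sit at offset 446, 16 bytes each;
    # byte 4 of each entry is the partition type (0x00 = empty).
    for i in range(4):
        ptype = mbr[446 + 16 * i + 4]
        print(f"partition {i + 1}: type 0x{ptype:02x}"
              + (" (empty)" if ptype == 0 else ""))
    ```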

    Read the article

  • haproxy + nginx: https trailing slashes redirected to http

    - by user1719907
    I have a setup where HTTP(S) traffic goes from HAProxy to nginx:

               HAProxy      nginx
      HTTP  -----> :80  ----> :9080
      HTTPS ----> :443  ----> :9443

    I'm having trouble with implicit redirects caused by trailing slashes going from https to http, like this:

      $ curl -k -I https://www.example.com/subdir
      HTTP/1.1 301 Moved Permanently
      Server: nginx/1.2.4
      Date: Thu, 04 Oct 2012 12:52:39 GMT
      Content-Type: text/html
      Content-Length: 184
      Location: http://www.example.com/subdir/

    The reason is obviously that HAProxy unwraps the SSL, so nginx sees only HTTP requests. I've tried setting X-Forwarded-Proto to https in the HAProxy config, but it does nothing. My nginx setup is as follows:

      server {
          listen 127.0.0.1:9443;
          server_name www.example.com;
          port_in_redirect off;
          root /var/www/example;
          index index.html index.htm;
      }

    And the relevant parts from the HAProxy config:

      frontend https-in
          bind *:443 ssl crt /etc/example.pem prefer-server-ciphers
          default_backend nginxssl

      backend nginxssl
          balance roundrobin
          option forwardfor
          reqadd X-Forwarded-Proto:\ https
          server nginxssl1 127.0.0.1:9443
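    While iterating on the configs, a repeatable check saves a lot of curl typing. A standard-library sketch that flags the https-to-http downgrade (www.example.com stands in for the real host, as in the curl example above):

    ```python
    import http.client
    import ssl

    HOST = "www.example.com"
    PATH = "/subdir"   # no trailing slash, to provoke nginx's implicit redirect

    # Certificate checks disabled, mirroring curl -k.
    ctx = ssl._create_unverified_context()

    conn = http.client.HTTPSConnection(HOST, 443, context=ctx)
    conn.request("HEAD", PATH)
    resp = conn.getresponse()
    location = resp.getheader("Location") or ""

    print(resp.status, location)
    if location.startswith("http://"):
        print("BROKEN: https request redirected to plain http")
    elif location.startswith("https://"):
        print("OK: redirect stays on https")
    ```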

    Read the article

  • Removing HttpModule for specific path in ASP.NET / IIS 7 application?

    - by soccerdad
    Most succinctly, my question is whether an ASP.NET 4.0 app running under IIS 7 integrated mode should be able to honor this portion of my Web.config file:

      <location path="auth/windows">
        <system.webServer>
          <modules>
            <remove name="FormsAuthentication"/>
          </modules>
        </system.webServer>
      </location>

    I'm experimenting with mixed-mode authentication (Windows and Forms). Using IIS Manager, I've disabled Anonymous authentication for auth/windows/winauth.aspx, which is within the location path above. I have Failed Request Tracing set up to trace various HTTP status codes, including 302s. When I request the winauth.aspx page, a 302 HTTP status code is returned. If I look at the request trace, I can see that a 401 (Unauthorized) was originally generated by the AnonymousAuthenticationModule. However, the FormsAuthenticationModule converts that to a 302, which is what the browser sees. So it seems as though my attempt to remove that module from the pipeline for pages in that path isn't working. But I'm not seeing any complaints anywhere (event viewer, yellow pages of death, etc.) that would indicate it's an invalid configuration. I want the 401 returned to the browser, which presumably would include an appropriate WWW-Authenticate header.

    A few other points: a) I do have <authentication mode="Forms"> in my Web.config, and that is what the 302 redirects to; b) I got the "name" of the module I'm trying to remove from the inetsrv\config\applicationHost.config file; c) I have this element in my Web.config file: <modules runAllManagedModulesForAllRequests="false">; d) I tried a <location> element for the path in which I set the authentication mode to "None", but that gave a yellow exception page saying that the property can't be set below the application level.

    Anyone had any luck removing modules in this fashion?

    Read the article

  • Print job leaves queue but document isn't printed

    - by midnightstar
    I'm dealing with an HP Deskjet F380 All-in-One printer. It's connected via USB to a desktop running Windows 7 Enterprise x64. If I attempt to print something like a web page or a Word document, the print job shows up in the print queue and the printer stirs; by stir, I mean it seems to prepare itself to print. However, the print job then leaves the queue (I'm thinking the computer sees it as completed) and the printer never actually prints anything. However, if I go into Devices and Printers under the Windows Start menu, into printer properties, and print a test page, the test page prints out successfully.

    I attempted to uninstall and re-install the printer drivers, but the printer continued the same behavior afterwards. I also connected the printer to another computer, and there it will print just about anything. I also checked to make sure that the computer the printer needs to be connected to was up to date as far as the OS goes; the machine is fully up to date. I played with the way the computer handles print spooling: under printer properties, on the "Advanced" tab, I had the print job print directly to the printer. In all these instances, the same behavior continued. I've restarted the print spooler service. I've also gone under C:\Windows\System32\spool\PRINTERS and deleted the files that were sitting in the folder. I have run SFC /scannow and the system found no errors in the system's integrity. I cold-rebooted the computer and the printer individually.

    The only lead I really have going for me is that since the printer prints on other PCs, I can only assume that there is something wrong with the way this PC is configured.

    Read the article

  • Copy a hard drive from a failed desktop machine using a second working one.

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80 GB IDE drive. I also have PC-B, running Windows Vista, which is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave to PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy the user folders (i.e. Documents and Settings/[username]/*), I am told that I cannot access the folders due to user permissions. I am doing this under an administrator account on PC-B.

    So the question is, how can I "back up" the data, preferably without making any changes to the drive contents? The reason for this is that it is possible that PC-A is failing due to a bad PSU, so I intend to replace that before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.
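    One approach that usually avoids the permissions fight without touching the source drive: Windows' built-in robocopy can copy in backup mode (/B), which bypasses ACL checks when run from an elevated prompt and only reads from the source. A sketch of driving it (the drive letters are assumptions; check them in Explorer first):

    ```python
    import subprocess

    SRC = r"E:\Documents and Settings"   # the old drive as mounted on PC-B
    DST = r"D:\PC-A-backup"              # somewhere on PC-B with enough space

    # /E     copy subdirectories, including empty ones
    # /B     backup mode: bypass ACL checks (needs an elevated prompt)
    # /R:1   one retry, /W:1 one-second wait, so locked files don't stall the run
    # /LOG   keep a record of anything that could not be read
    result = subprocess.run([
        "robocopy", SRC, DST,
        "/E", "/B", "/R:1", "/W:1",
        r"/LOG:D:\pc-a-copy.log",
    ])

    # robocopy exit codes below 8 mean success (possibly with skipped extras).
    print("OK" if result.returncode < 8 else f"FAILED (code {result.returncode})")
    ```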

    Read the article

  • I (stupidly) converted a TrueCrypt encrypted disk to GPT in Disk Management: now TrueCrypt won't mount it

    - by asilentfire
    Backstory: I moved a Macrium Reflect disk image from my TrueCrypt external drive (with whole-disk encryption) onto an unencrypted drive, then used Windows PE with Macrium Reflect to restore my internal disk from the recovery image on the unencrypted external drive; after that, my Windows 8 failed to boot. I then went back and also recovered the system partition (looking now, it is currently EFI), but I still couldn't boot into my backup. I was in a hurry to get online for something, so I just did a clean install of Windows 8 without the backup.

    After I installed Windows 8, I went into Disk Management out of curiosity to see if there were other partitions with Windows 8 that Macrium might have missed, and there is (by default) a Recovery Partition of 100 MB. My memory of this is hazy, as I was trying to get up and running for an exam at 4 AM: something in Disk Management prompted me to convert my encrypted external drive to GPT. I have no idea why I did this, but I went ahead and allowed it to convert my TrueCrypt drive to GPT. Now I can't mount the drive in TrueCrypt. Disk Management sees it as Disk 1, Basic, and Unallocated. I tried converting it back to MBR with Disk Management, but no dice with TrueCrypt :( If I try to mount the disk in TrueCrypt I get the message:

      Incorrect password or not a TrueCrypt volume

    I should never have messed with a TrueCrypt drive in Disk Management, but I did. I have important college work on that drive, and fear I have lost it forever. PLEASE HELP
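    Whatever you try next, it's worth taking a raw read-only image of the drive first, so no further repair attempt can destroy evidence; a full image also preserves the backup volume header TrueCrypt keeps near the end of the volume, which recovery tools can use. A sketch (PhysicalDrive1 is an assumption; confirm the disk number in Disk Management and run elevated):

    ```python
    # Read-only: images the raw disk so recovery experiments are reversible.
    SRC = r"\\.\PhysicalDrive1"      # confirm the disk number in Disk Management!
    DST = r"D:\external-drive.img"   # destination needs as much space as the disk
    CHUNK = 1024 * 1024              # 1 MiB, a multiple of the sector size

    copied = 0
    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        while True:
            try:
                block = src.read(CHUNK)
            except OSError:
                break   # raw devices can raise instead of returning b"" at the end
            if not block:
                break
            dst.write(block)
            copied += len(block)

    print(f"imaged {copied} bytes")
    ```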

    Read the article

  • Messages don’t always appear in Mail.app

    - by MikeHoss
    I asked this question on Server Fault, but it was correctly suggested that Super User is better -- and I agree. My wife and I share a Mac and use different accounts. We both use Apple's standard Mail.app. We can also get to our email accounts via the SquirrelMail installation our webhost provides. Both SquirrelMail and Mail.app are connecting via IMAP.

    My wife was the first to notice that not all messages were getting to Mail.app. She would check the Mac (our main machine) and then a little while later check mail from another machine via SquirrelMail and see messages there that should have been on the Mac. She would go back, and those messages would never show up. Lately I have been seeing the same thing, though less often. I can't reproduce it, or point to a specific message and see why it hasn't come over. I've looked in Junk, etc., and the Mac simply never sees those messages via IMAP. Does anyone have a guess at something I could poke around at?
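    One way to take Mail.app out of the equation is to ask the IMAP server directly what it holds; if a message appears in this dump but never in Mail.app, the client is at fault, and if it's absent here too, the problem is upstream. A standard-library sketch (host and credentials are placeholders):

    ```python
    import imaplib

    HOST = "mail.example.com"    # placeholder: your webhost's IMAP server
    USER = "you@example.com"     # placeholder credentials
    PASSWORD = "secret"

    imap = imaplib.IMAP4_SSL(HOST)
    imap.login(USER, PASSWORD)

    # Ask the server for every mailbox and its message count, then compare
    # against what Mail.app shows for the same account.
    status, mailboxes = imap.list()
    for raw in mailboxes:
        # Crude parse of the LIST reply; assumes folder names without spaces,
        # possibly quoted, at the end of each line.
        name = raw.decode().split(" ")[-1].strip('"')
        status, data = imap.select(name, readonly=True)
        if status == "OK":
            print(f"{name}: {data[0].decode()} messages")
    imap.logout()
    ```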

    Read the article

  • Win7 loses connection to network shares after resume unless server specified using FQDN

    - by Szonja Zemkó
    My Win7 client has a connection to a Linux server and its shared folders. The problem occurs when the computer wakes up after sleep: one of the shared folders is no longer accessible, and I receive the following message: Error code: 80070035, The network path was not found.

    I have a problem with one specific folder only. When I restart the computer, the problematic folder is accessible again. When I log off before sleep, the folder is accessible after wakeup. If I try to access the folder using the FQDN of the server or the server IP, it is also accessible. As a temporary solution I mapped the folder to a network drive using the FQDN, and it's working fine, but it's inconvenient, since every other folder is accessible on the server. To summarize:

      \\server\problematicshare no longer works after resume (the Samba server sees my client connect, then disconnect a few seconds later, while I receive the above error message)
      \\server\othershare works after resume
      \\fqdn.of.server\problematicshare always works
      \\ip.of.server\problematicshare always works
      once the problem manifests, I'm no longer able to restart the "Workstation" service (it is not responding)
      restarting the "Computer Browser" service has no apparent effect
      the event log doesn't contain anything that seems relevant
      "ping server" works

    Read the article

  • Windows 8 Killed my SSDs

    - by SLaks
    I have a computer running two 256 GB Crucial M4s (CT256M4SSD2) in a RAID 0 (striped) array on an ASUS P9X79 Pro, using Intel's built-in RAID system. I recently installed Windows 8 Pro as UEFI on this RAID array (wiping a fully functional Windows 7 non-UEFI installation). Now, whenever the computer is left running for about an hour, the system no longer sees those drives. Since those drives contain Windows, this leads to various forms of BSODs. If Intel RSTe (the RAID manager) is running at the time, it will say that the disk backing that RAID array has been removed.

    Once this happens, if I reset the computer, it will no longer boot. Entering BIOS setup shows that the SATA 3 (6 Gbps) ports those disks are connected to are both empty. If I then power down the system completely and turn it on again, the drives reappear, but the problem repeats after another hour or so. I have inconclusively determined that the problem occurs even if Windows is not running (booted into the installation environment from a UEFI flash drive). I don't think there has been any data corruption since this started happening, although I have had two strange issues with a Git repo on that disk. sfc /scannow and Intel's disk check (in RSTe) both find nothing. Does anyone know what might cause this?

    Read the article

  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but it is unfortunately true that a common misconception is that the query with the highest percentage relative to the batch is the worst performing. Simply put, it is a lie, or more accurately, we don't understand what these figures mean. Consider the two simple queries below:

      SELECT *
      FROM Person.BusinessEntity
      JOIN Person.BusinessEntityAddress
        ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
      go
      SELECT *
      FROM Sales.SalesOrderDetail
      JOIN Sales.SalesOrderHeader
        ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths (though you are more than welcome to), that's a 90.2% / 9.8% split. Close, but no cigar.

    Let's try a different tack. Looking at the execution plan, the "Estimated Subtree Cost" of query 1 is 0.29449 and of query 2 it is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those, and that's the figures we are after. But what is the worrying word there? "Estimated". So these are not "actual" execution costs. But what's the problem in comparing the estimated costs to derive a meaning of "most costly"? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit. Modify the second query to also show the total number of lines on each order:

      SELECT *, COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID)
      FROM Sales.SalesOrderDetail
      JOIN Sales.SalesOrderHeader
        ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    and the split in percentages is now 6% / 94%, while the Profiler metrics show even more of a discrepancy.

    Estimates can be out versus actuals for a whole host of reasons; scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF:

      Create Function dbo.udfSumSalesForCustomer(@CustomerId integer)
      returns money
      as
      begin
          Declare @Sum money
          Select @Sum = SUM(SalesOrderHeader.TotalDue)
          from Sales.SalesOrderHeader
          where CustomerID = @CustomerId
          return @Sum
      end

    If we have two statements, one that fires the UDF and another that doesn't:

      Select CustomerID from Sales.Customer order by CustomerID
      go
      Select CustomerID, dbo.udfSumSalesForCustomer(Customer.CustomerID)
      from Sales.Customer order by CustomerID

    the cost relative to the batch is a 50/50 split, but there has to be an actual cost to firing the UDF. Indeed, Profiler shows us something nowhere even remotely near 50/50!

    Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same 'cost' too:

      SELECT SalesOrderDetailID, SalesOrderId,
             SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding)
      from Sales.SalesOrderdetail
      go
      SELECT SalesOrderDetailID, SalesOrderId,
             SUM(LineTotal) OVER (PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding)
      from Sales.SalesOrderdetail

    By now it won't be a great surprise that the Profiler trace reads more than a *tiny* bit different.

    So, the moral of the story: percentage relative to batch can give a rough 'finger in the air' measurement, but don't rely on it as fact.
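    As a footnote: if you'd rather pull the estimated figure programmatically than hover in SSMS, the plan XML carries it as StatementSubTreeCost on each StmtSimple node. A sketch, assuming the third-party pyodbc package and the same AdventureWorks queries as above:

    ```python
    import pyodbc
    import xml.etree.ElementTree as ET

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=localhost;DATABASE=AdventureWorks;"
        "Trusted_Connection=yes", autocommit=True)
    cur = conn.cursor()

    # With SHOWPLAN_XML ON the server returns the estimated plan as a
    # single-row XML result instead of executing the statement.
    cur.execute("SET SHOWPLAN_XML ON")
    cur.execute("""
        SELECT * FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
          ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID
    """)
    plan_xml = cur.fetchone()[0]
    cur.execute("SET SHOWPLAN_XML OFF")

    ns = "{http://schemas.microsoft.com/sqlserver/2004/07/showplan}"
    root = ET.fromstring(plan_xml)
    for stmt in root.iter(f"{ns}StmtSimple"):
        print("estimated subtree cost:", stmt.get("StatementSubTreeCost"))
    ```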

    Read the article

  • No more internet connection after update in 14.04 with Intel Dual Band Wireless AC 7260

    - by luis
    My Dell XPS 15 (Haswell) was working fine until I stupidly accepted Ubuntu updates recently. Since then, my WiFi does not work (it shows "device not managed" when clicking the WiFi icon in the toolbar). Even a USB-to-Ethernet adapter does not seem to work. Bluetooth at least "sees" other Bluetooth devices around... See below the output from dmesg (dmesg | grep iwl):

      [  886.462459] iwlwifi 0000:06:00.0: irq 51 for MSI/MSI-X
      [  886.462561] iwlwifi 0000:06:00.0: Direct firmware load failed with error -2
      [  886.462562] iwlwifi 0000:06:00.0: Falling back to user helper
      [  886.463284] iwlwifi 0000:06:00.0: loaded firmware version 22.1.7.0 op_mode iwlmvm
      [  886.475345] iwlwifi 0000:06:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144
      [  886.475433] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S
      [  886.475684] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S
      [  886.689214] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'

    Below is the output from modinfo iwlwifi:

      filename:     /lib/modules/3.13.0-29-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko
      license:      GPL
      author:       Copyright(c) 2003-2013 Intel Corporation <[email protected]>
      version:      in-tree:
      description:  Intel(R) Wireless WiFi driver for Linux
      firmware:     iwlwifi-100-5.ucode
      firmware:     iwlwifi-1000-5.ucode
      firmware:     iwlwifi-135-6.ucode
      firmware:     iwlwifi-105-6.ucode
      firmware:     iwlwifi-2030-6.ucode
      firmware:     iwlwifi-2000-6.ucode
      firmware:     iwlwifi-5150-2.ucode
      firmware:     iwlwifi-5000-5.ucode
      firmware:     iwlwifi-6000g2b-6.ucode
      firmware:     iwlwifi-6000g2a-5.ucode
      firmware:     iwlwifi-6050-5.ucode
      firmware:     iwlwifi-6000-4.ucode
      firmware:     iwlwifi-3160-7.ucode
      firmware:     iwlwifi-7260-7.ucode
      srcversion:   1E6912E109D5A43B310FB34
      alias:        pci:v00008086d0000095Asv*sd00005490bc*sc*i*
      (a pack of lines of the kind "alias: pci:xxxxx...." that I guess are not helpful)
      alias:        pci:v00008086d0000095Bsv*sd00005290bc*sc*i*
      depends:      cfg80211
      intree:       Y
      vermagic:     3.13.0-29-generic SMP mod_unload modversions
      signer:       Magrathea: Glacier signing key
      sig_key:      66:02:CB:36:F1:31:3B:EA:01:C4:BD:A9:65:67:CF:A7:23:C9:70:D8
      sig_hashalgo: sha512
      parm:         swcrypto:using crypto in software (default 0 [hardware]) (int)
      parm:         11n_disable:disable 11n functionality, bitmap: 1: full, 2: disable agg TX, 4: disable agg RX, 8 enable agg TX (uint)
      parm:         amsdu_size_8K:enable 8K amsdu size (default 0) (int)
      parm:         fw_restart:restart firmware in case of error (default true) (bool)
      parm:         antenna_coupling:specify antenna coupling in dB (defualt: 0 dB) (int)
      parm:         wd_disable:Disable stuck queue watchdog timer 0=system default, 1=disable, 2=enable (default: 0) (int)
      parm:         nvm_file:NVM file name (charp)
      parm:         bt_coex_active:enable wifi/bt co-exist (default: enable) (bool)
      parm:         led_mode:0=system default, 1=On(RF On)/Off(RF Off), 2=blinking, 3=Off (default: 0) (int)
      parm:         power_save:enable WiFi power management (default: disable) (bool)
      parm:         power_level:default power save level (range from 1 - 5, default: 1) (int)

    I downloaded the latest iwlwifi firmware from git (git clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git; copied iwlwifi-3160-9.ucode, iwlwifi-7260-9.ucode and iwlwifi-7265-9.ucode to /lib/firmware and rebooted) but, as you can imagine, it did not help.

    Update #1: Downloaded from http://wireless.kernel.org/en/users/Drivers/iwlwifi?action=AttachFile&do=get&target=iwlwifi-7260-ucode-22.15.8.0.tgz and copied the file into /lib/firmware. After reloading it with modprobe, it seems to be OK:

      [ 14.761283] iwlwifi 0000:06:00.0: enabling device (0000 -> 0002)
      [ 14.761472] iwlwifi 0000:06:00.0: irq 51 for MSI/MSI-X
      [ 14.772478] iwlwifi 0000:06:00.0: loaded firmware version 22.15.8.0 op_mode iwlmvm
      [ 14.800274] iwlwifi 0000:06:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144
      [ 14.800349] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S
      [ 14.800657] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S
      [ 15.007048] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'

    However, clicking the WiFi icon in the toolbar still shows "device not managed". Any clues? Many thanks! Luis
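    For the "device not managed" part specifically: on Ubuntu that message usually means NetworkManager has been told to ignore the interface, typically via managed=false in the [ifupdown] section of its config (or an entry in /etc/network/interfaces), and an update can reset this. A sketch that flips the flag on a stock 14.04 layout; back the file up first, since rewriting it drops comments:

    ```python
    import configparser
    import subprocess

    CONF = "/etc/NetworkManager/NetworkManager.conf"

    cfg = configparser.ConfigParser(delimiters=("=",))
    cfg.read(CONF)

    # On Ubuntu, the [ifupdown] section's managed flag decides whether
    # NetworkManager handles interfaces that also appear in
    # /etc/network/interfaces; updates have been known to reset it to false.
    if not cfg.has_section("ifupdown"):
        cfg.add_section("ifupdown")
    print("managed was:", cfg.get("ifupdown", "managed", fallback="unset"))
    cfg.set("ifupdown", "managed", "true")

    with open(CONF, "w") as f:
        cfg.write(f, space_around_delimiters=False)

    # Restart NetworkManager so the change takes effect (14.04 uses upstart).
    subprocess.run(["restart", "network-manager"])
    ```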

    Read the article

  • Silverlight Cream for February 22, 2011 -- #1050

    - by Dave Campbell
    In this Issue: Robby Ingebretsen, Victor Gaudioso, Andrea Boschin(-2-), Rudi Grobler(-2-), Michael Crump, Deborah Kurata, Dennis Delimarsky, Pete Vickers, Yochay Kiriaty, Peter Kuhn, WindowsPhoneGeek, and Jesse Liberty(-2-).

    Above the Fold:
      Silverlight: "Silverlight Simple MVVM Printing" - Deborah Kurata
      WP7: "Creating theme friendly UI in WP7 using OpacityMask" - WindowsPhoneGeek
      Tools: "KAXAML v1.8" - Robby Ingebretsen

    Shoutouts: Peter Foot posted Silverlight for Windows Phone Toolkit - Feb 2011. Rudi Grobler posts his top requested features for WP7, Silverlight, and WCF: vNext ... see you in Seattle, Rudi!

    From SilverlightCream.com:

    KAXAML v1.8 - Robby Ingebretsen just posted KAXAML v1.8, which now supports .NET 4.0, WPF, and Silverlight 4 ... go grab it.

    Learn how to use Blend to create a Data Store, add properties to it, etc. - Victor Gaudioso has 3 new Silverlight and/or Expression Blend video tutorials up... first is this one on creating a Data Store and adding properties to it (oh... read the title :)). Next up is: Send async messages across UserControls or even applications, followed by the latest: Create a Sketchflow Animation using the Sketchflow Animation Panel.

    A base class for threaded Application Services - Andrea Boschin continues his IApplicationServices series with this one on a base class he created to develop Application Services that run a thread.

    Windows Phone 7 - Part #6: Taking advantage of the phone - Andrea Boschin also has part 6 of his series at SilverlightShow on WP7... this one covers a bunch of items: Capabilities, Launchers/Choosers, and gestures... plus the source for a fun game.

    {homebrew} Skype for WP7 - Rudi Grobler posted about the availability of (some features of) Skype for WP7. The XDA guys have working contacts and the ability to chat, plus they're looking for people to join in... follow Rudi's link, and let them know you're up for it!

    Simple menu for your WP7 application - Rudi Grobler has another post up about a very simple menu control for WP7 that he produced that is also very easy to use.

    Attaching a Command to the WP7 Application Bar - Michael Crump shows how to bind the application bar to a RelayCommand with the use of MVVM Light in 7 easy steps :)

    Silverlight Simple MVVM Printing - Deborah Kurata continues her MVVM series with this one on printing what your user sees on the page... but doing so within the MVVM pattern.

    Enhancing the general Zune experience on Windows Phone 7 with Zune web API - Dennis Delimarsky apparently likes the Zune as much as I do, and has ratted out tons of information about the Zune API for use in WP7 apps... and lots of code...

    Validating input forms in Windows Phone 7 - Pete Vickers takes a great detailed spin through validation on the WP7... the rules have changed, but Pete explains with some code examples.

    Windows Phone Shake Gestures Library - Yochay Kiriaty discusses shake gestures for the WP7 device and then describes the "Windows Phone Shake Gesture Library" that detects shake gestures in 3D space... and after a great description has the link for downloading.

    What difference does a sprite sheet make? - Peter Kuhn is writing a series at SilverlightShow on XNA for Silverlight devs that I've highlighted. An offshoot of that is this discussion of the use of sprite sheets in game development.

    Creating theme friendly UI in WP7 using OpacityMask - WindowsPhoneGeek has a new post up today on using opacity masks in WP7 to enable using one set of icons for either the dark or light theme... too cool, you'll wanna check this out!

    Linq to XML - Jesse Liberty continues with Linq with regard to WP7 with this post on Linq to XML... and why XML? crap... I was just saving/loading XML today! :)

    Lambda - Not as weird as it sounds - Jesse Liberty then jumps into lambda expressions... maybe it's a chance for me to learn WTF the lambdas really do that I use all the time!

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Know Your Audience, And/Or Your Customer

    - by steve.diamond
    Yesterday I gave an internal presentation to about 20 Oracle employees on "messaging," not messaging technology, but embarking on the process of building messages. One of the elements I covered was the importance of really knowing and understanding your audience. As a humorous reference I included two side-by-side photos of Oakland A's fans and Oakland Raiders fans. The Oakland A's fans looked like happy-go-lucky drunk types. The Oakland Raiders fans looked like angry extras from a low budget horror flick. I then asked my presentation attendees what these two groups had in common. Here's what I heard.

      --They're human (at least I THINK they're human).
      --They're from Oakland.
      --They're sports fans.

    After that, it was anyone's guess. A few days earlier we were putting the finishing touches on a sales presentation for one of our product lines. We had included an upfront "lead in" addressing how the economy is improving, yet that doesn't mean sales executives will have any more resources to add to their teams, invest in technology, etc. This "lead in" included miscellaneous news article headlines and statistics validating the slowly improving economy.

    When we subjected this presentation to internal review two days ago, this upfront section in particular was scrutinized. "Is the economy really getting better? I (exclamation point) don't think it's really getting better. Haven't you seen the headlines coming out of Greece and Europe?" Then the question TO ME became, "Who will actually be in the audience that sees and hears this presentation? Will s/he be someone like me? Or will s/he be someone like the critic who didn't like our lead-in?" We took the safe route and removed that lead in. After all, why start a "pitch" with a component that is arguably subjective? What if many of our audience members are individuals at organizations still facing a strong headwind? For reasons I won't go into here, it was the right decision to make.

    The moral of the story: Make sure you really know your audience. Harness the wisdom of the information your organization's CRM systems collect to get that fully informed "customer view." Conduct formal research. Conduct INFORMAL research. Ask lots of questions. Study industries and scenarios that have nothing to do with yours to see "how they do it." Stop strangers in coffee shops and on the street...seriously.

    Last week I caught up with an old friend from high school who recently retired from a 25 year career with the USMC. He said, "I can learn something from every single person I come into contact with." What a great way of approaching the world. Then, think about and write down what YOU like and dislike as a customer. But also remember that when it comes to your company's products, you are most likely NOT the customer, so don't go overboard in superimposing your own world view. Approaching the study of customers this way adds rhyme, reason and CONTEXT to lengthy blog posts like this one. Know your audience.

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
    I am a great believer in the Boy Scout Rule:

      "Always check a module in cleaner than when you checked it out. No matter who the original author was, what if we always made some effort, no matter how small, to improve the module? What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small little part."

    I am also a great believer in the related idea of Opportunistic Refactoring:

      "Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to be cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes."

    Particularly note the following excerpt from the refactoring article:

      "I'm wary of any development practices that cause friction for opportunistic refactoring ... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out, be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective."

    Where I work, there is one development practice that causes heavy friction: code review (CR). Whenever I change anything that's not in the scope of my "assignment", I'm rebuked by my reviewers for making the change harder to review. This is especially true when refactoring is involved, since it makes line-by-line diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all.

    I claim that the benefits are worth it: 3 reviewers will work a little harder (to actually understand the code before and after, rather than look at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next 100 developers reading and maintaining the code will benefit. When I present this argument to my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However, I claim this is a myth:

    (1) Most of the time, you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: "As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done."

    (2) Nobody is going to look favorably at you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead, and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to make, this issue is minimized.

    The issue is exacerbated by ReSharper, as each new file I add to the change (and I can't know in advance exactly which files will end up changed) is usually littered with errors and suggestions - most of which are spot on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code not only won't improve my standing, but will actually lower it and paint me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand watching it, let alone calling it from my methods! Any thoughts on how I can remedy this situation?

    Read the article

  • concurrency::accelerator_view

    - by Daniel Moth
    Overview

    We saw previously that accelerator represents a target for our C++ AMP computation or memory allocation and that there is a notion of a default accelerator. We ended that post by introducing how one can obtain accelerator_view objects from an accelerator object through the accelerator class's default_view property and the create_view method. The accelerator_view objects can be thought of as handles to an accelerator. You can also construct an accelerator_view given another accelerator_view (through the copy constructor or the assignment operator overload). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator_view objects between them to determine if they refer to the same underlying accelerator. We'll see later that when we use concurrency::array objects, the allocation of data takes place on an accelerator at array construction time, so there is a constructor overload that accepts an accelerator_view object. We'll also see later that a new concurrency::parallel_for_each function overload can take an accelerator_view object, so it knows on what target to execute the computation (represented by a lambda that the parallel_for_each also accepts). Beyond normal usage, accelerator_view is a quality of service concept that offers isolation to multiple "consumers" of an accelerator. If in your code you are accessing the accelerator from multiple threads (or, in general, from different parts of your app), then you'll want to create separate accelerator_view objects for each thread.

    flush, wait, and queuing_mode

    When you create an accelerator_view via the create_view method of the accelerator, you pass in an option of immediate or deferred, which are the two members of the queuing_mode enum. At any point you can access this value from the queuing_mode property of the accelerator_view. When the queuing_mode value is immediate (which is the default), any commands sent to the device such as kernel invocations and data transfers (e.g. parallel_for_each and copy, as we'll see in future posts) will get submitted as soon as the runtime sees fit (that is the definition of immediate). When the value of queuing_mode is deferred, the commands will be batched up. To send all buffered commands to the device for execution, there is a non-blocking flush method that you can call. If you wish to block until all the commands have been sent, there is a wait method you can call. Deferring is a more advanced scenario aimed at performance gains when you are submitting many device commands and you want to avoid the tiny overhead of flushing/submitting each command separately.

    Querying information

    Just like accelerator, accelerator_view exposes the is_debug and version properties. In fact, you can always access the accelerator object from the accelerator property on the accelerator_view class to access the accelerator interface we looked at previously.

    Interop with D3D (aka DX)

    In a later post I'll show an example of an app that uses C++ AMP to compute data that is used in pixel shaders. In those scenarios, you can benefit by integrating C++ AMP into your graphics pipeline, and one of the building blocks for that is being able to use the same device context from both the compute kernel and the other shaders. You can do that by going from accelerator_view to device context (and vice versa), through part of our interop API in amp.h: get_device and create_accelerator_view. More on those in a later post.
Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • How to Set Up Your Enterprise Social Organization

    - by Mike Stiles
    The rush for business organizations to establish, grow, and adopt social was driven out of necessity and inevitability. The result, however, was a sudden, booming social presence creating touch points with customers, partners and influencers, but without any corporate social organization or structure in place to effectively manage it. Even today, many business leaders remain uncertain as to how to corral this social media thing so that it makes sense for their enterprise. Imagine their panic when they hear one of the most beneficial approaches to corporate use of social involves giving up at least some hierarchical control and empowering employees to publicly engage customers. And beyond that, they should also be empowered, regardless of their corporate status, to engage and collaborate internally, spurring "off the grid" innovation.

    An HBR blog points out that traditionally, enterprise organizations function from the top down, and employees work end-to-end, structured around business processes. But the social enterprise opens up structures that up to now have not exactly been embraced by turf-protecting executives and managers. The blog asks, "What if leaders could create a future where customers, associates and suppliers are no longer seen as objects in the system but as valued sources of innovation, ideas and energy?" What if indeed? The social enterprise activates internal resources without the usual obsession with position. It is the dawn of mass collaboration. That does not, however, mean this mass collaboration has to lead to uncontrolled chaos.

    In an extended interview with Oracle, Altimeter Group analyst Jeremiah Owyang and Oracle SVP Reggie Bradford paint a complete picture of today's social enterprise, including internal organizational structures Altimeter Group has seen emerge. One sign of a mature social enterprise is the establishing of a social Center of Excellence (CoE), which serves as a hub for high-level social strategy, training and education, research, measurement and accountability, and vendor selection. This CoE is led by a corporate Social Strategist, most likely from a Marketing or Corporate Communications background. Reporting to them are the Community Managers, the front lines of customer interaction and engagement; business unit liaisons that coordinate the enterprise; and social media campaign/product managers, social analysts, and developers. With content rising as the defining factor for social success, Altimeter also sees a Content Strategist position emerging.

    Across the enterprise, Altimeter has seen 5 organizational patterns. Watching the video will give you the pros and cons of each.

      Decentralized – Anyone can do anything at any time on any social channel.
      Centralized – One central group controls all social communication for the company.
      Hub and Spoke – A centralized group, but business units can operate their own social under the hub's guidance and execution. Most enterprises are using this model.
      Dandelion – Each business unit develops their own social strategy & staff, has its own ability to deploy, and its own ability to engage under the central policies of the CoE.
      Honeycomb – Every employee can do social, but as opposed to the decentralized model, it's coordinated and monitored on one platform.

    The average enterprise has a whopping 178 social accounts, nearly ¼ of which are usually semi-idle and need to be scrapped. The last thing any C-suite needs is to cope with fragmented technologies, solutions and platforms. It's neither scalable nor strategic. The prepared, effective social enterprise has a technology partner that can quickly and holistically integrate emerging platforms and technologies, such that whatever internal social command structure you've set up can continue efficiently executing strategy without skipping a beat. @mikestiles

    Read the article

