Search Results

Search found 6497 results on 260 pages for 'minimum spanning tree'.


  • Algorithm for Virtual Machine (VM) consolidation in the cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci respectively. Migrating any VM from one PM to another has an associated cost which depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of N and the migration cost, i.e. to minimize the number of working PMs while not degrading the service level through excessive migrations. My approach: the brute-force approach is to choose the least-loaded PM and try to fit its VMs onto other PMs using the First Fit Decreasing algorithm (sketched below); alternatively, we can select victim PMs and target PMs based on their load levels and shut down the victims whose VMs can all be moved to targets. I tried this greedy approach on the data of Baadal (the IIT-D cloud), but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation but was unable to understand very much of it. I used these links: http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf and http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf Would anyone please explain a solution or suggest a new approach with better performance? Thanks in advance.
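
    A minimal sketch of the First Fit Decreasing step mentioned above, in Python. The data shapes and the example numbers are illustrative, not from Baadal; capacities are treated as simple additive RAM/CPU budgets.

        # Try to place every VM of a victim PM onto the remaining PMs,
        # largest RAM demand first (First Fit Decreasing).
        def first_fit_decreasing(vms, pms):
            """vms: list of (vm_id, ram, cpu); pms: {pm_id: [free_ram, free_cpu]}."""
            plan = {}
            for vm_id, ram, cpu in sorted(vms, key=lambda v: v[1], reverse=True):
                for pm_id, free in pms.items():
                    if free[0] >= ram and free[1] >= cpu:
                        free[0] -= ram
                        free[1] -= cpu
                        plan[vm_id] = pm_id
                        break
                else:
                    return None  # some VM fits nowhere: this victim cannot be emptied
            return plan

        # Empty a victim holding three VMs onto two target PMs.
        plan = first_fit_decreasing([("a", 4, 2), ("b", 2, 1), ("c", 1, 1)],
                                    {"pm1": [4, 2], "pm2": [4, 4]})
        print(plan)  # {'a': 'pm1', 'b': 'pm2', 'c': 'pm2'}

    One note on the greedy scheme: since the stated migration cost scales with RAM, choosing the victim PM whose VMs have the smallest total RAM (rather than the least-loaded PM generally) may align the heuristic better with the objective.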

    Read the article

  • How do I fix the HDMI/DVI display output with Intel HD 4000 Graphics in 12.04?

    - by YumYumYum
    I have an Alienware Dell PC with Intel HD 4000 graphics (Ivy Bridge), as verified by the output of lspci | grep VGA below:

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)

    The PC only has HDMI and DVI display outputs, and over HDMI I am only being offered abnormal resolutions. As you can see below, xrandr does not even list HDMI1 or DVI1, just a fallback output:

        $ export DISPLAY=:0.0 && xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1360 x 768, maximum 1360 x 768
        default connected 1360x768+0+0 0mm x 0mm
           1360x768       0.0*
           1024x768       0.0
           800x600        0.0
           640x480        0.0

    How can I fix this? Does it just need to be configured differently, or will I need a newer kernel (as the Intel graphics driver is included in the kernel)?

    Follow-up: I upgraded the kernel to the latest mainline build.

    Step 1: go to http://kernel.ubuntu.com/~kernel-ppa/mainline/ and open the latest release, http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/, then download:

        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3_3.6.0-030600rc3.201208221735_all.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-extra-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb

    Step 2: sudo dpkg -i linux*.deb

    Step 3: reboot. uname -a now shows that I have Ubuntu 12.04 with the latest kernel:

        $ uname -a
        Linux sun-Alienware-X51 3.6.0-030600rc3-generic #201208221735 SMP Wed Aug 22 21:36:32 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    But the same problem remains.
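
    A hedged observation, since no Xorg log is shown here: an xrandr tree with only a "default" output (plus the gamma error) usually means X never loaded the intel driver and fell back to a generic framebuffer, so a kernel upgrade alone will not surface HDMI1/DVI1. A minimal /etc/X11/xorg.conf that forces the driver is worth a try (the Identifier string is arbitrary):

        Section "Device"
            Identifier "Intel HD 4000"
            Driver     "intel"
        EndSection

    If /var/log/Xorg.0.log then shows the intel driver failing on this chip, a newer xserver-xorg-video-intel (not just a newer kernel) is the likely missing piece.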

    Read the article

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed function). I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions.

    Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights are going to be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define some fixed number of lights in a shader, right? So if I want to stick with 8, should I write my general-purpose shader to have 8 lights and then use uniforms to tell it how many / which lights to use? (A sketch of this pattern follows below.)

    Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors". If I've got a fragment shader being used on some number of fragments in a given triangle, I assume they must each have their own stack to work on. Are read-only variables copied there, or read when needed?

    My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix, apply all my translations, rotations, and scales to this matrix, and then provide the matrix to the vertex shader so that it can do very quick vertex transformations. Is this approach sound?

    Edit: My original intention was to ask if there was a tutorial that would explain the bare minimum necessary to jump from fixed function to shaders. Thanks!
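
    Here is a minimal GLSL sketch of that lighting pattern; the struct fields and names are illustrative, and precision qualifiers are omitted. Note that GLSL ES 2.0 only guarantees loops with constant bounds, hence the loop runs to MAX_LIGHTS and breaks early:

        // Fixed-size light array plus a count uniform: the CPU side fills
        // u_lights[0..u_numLights-1] and sets u_numLights per draw call.
        #define MAX_LIGHTS 8
        struct Light {
            vec3 position;
            vec3 color;
        };
        uniform Light u_lights[MAX_LIGHTS];
        uniform int u_numLights;

        vec3 accumulateLight(vec3 normal, vec3 fragPos) {
            vec3 result = vec3(0.0);
            for (int i = 0; i < MAX_LIGHTS; ++i) {  // constant bound for ES 2.0
                if (i >= u_numLights) break;        // ignore the unused slots
                vec3 L = normalize(u_lights[i].position - fragPos);
                result += u_lights[i].color * max(dot(normal, L), 0.0);
            }
            return result;
        }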

    Read the article

  • Any mobile-friendly Credit Card billing solutions for mobile sites similar to Bango?

    - by Programmer
    Are there any mobile-friendly credit card billing solutions for mobile sites similar to Bango? The advantages of Bango I have seen, compared to regular credit card solutions, that make it considerably "mobile-friendly" are:

    1) It does not require the user to enter their full name and billing address to make a payment. The user is only required to enter their credit card number, expiration date, and CVC code (if they are in the U.S., they will also have to enter their zip code). That is significantly less input than is normally required for credit card payments, which is a big plus on small mobile keypads.

    2) After a user makes an initial credit card payment, their details are stored by Bango, and the next time the user needs to make a payment with the same credit card, they just have to click a single link and it processes the payment on the stored card. Needless to say, this is very convenient for mobile users, as it is analogous to direct carrier billing as far as the user is concerned: they won't need to input any details.

    The downsides of Bango are that its fees are higher than others', all payments must be processed via their site and branding, there is a high minimum ($1.99) and a low maximum ($30) on how much you can charge users, and you need to pay a monthly fee on top of the high transaction costs. It is due to these downsides that I am looking for an alternative solution that also offers advantages 1) and 2) above. Is there anything like that? I looked at JunglePay and they do neither 1) nor 2).

    Read the article

  • How to force Multiple Monitors correct resolutions for LightDM?

    - by Hanynowsky
    I am affected by this bug: https://bugs.launchpad.net/ubuntu/+source/unity-greeter/+bug/874241 In other words: if, like me, you have a laptop connected to a second monitor of higher resolution, LightDM at the login stage mirrors the displays on both screens and assigns them a common resolution (1024x768 in my case) instead of extending the desktop (primary screen with the greeter and secondary with just a logo, as described in the Multiple Monitors UX specification for 12.04). Here is my xrandr -q:

        @L502X:~$ xrandr -q
        Screen 0: minimum 320 x 200, current 1920 x 1848, maximum 8192 x 8192
        LVDS1 connected 1366x768+309+1080 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768       60.0*+
           1360x768       59.8     60.0
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 510mm x 287mm
           1920x1080      60.0*+
           1600x1200      60.0
           1680x1050      60.0
           1280x1024      60.0
           1440x900       59.9
           1280x960       60.0
           1280x800       59.8
           1024x768       60.0
           800x600        60.3     56.2
           640x480        60.0
        DP1 disconnected (normal left inverted right x axis y axis)

    I tried to force LightDM to execute some xrandr commands in order to set the right resolution for each monitor and extend the desktop, but I get a LOW GRAPHICS MODE error ("You're running in low graphics mode; your screen, input devices... did not get detected"). I created a simple script named lightdmxrandr.sh:

        #!/bin/sh
        xrandr --output HDMI1 --primary --mode 1920x1080 --output LVDS1 --mode 1366x768 --below HDMI1

    and told LightDM to run it in /etc/lightdm/lightdm.conf:

        [SeatDefaults]
        greeter-session=unity-greeter
        user-session=ubuntu
        greeter-setup-script=/usr/bin/numlockx on
        display-setup-script=/home/hanynowsky/lightdmxrandr.sh

    Does anyone know what is wrong? Thanks in advance.

    Read the article

  • How to approach scrum task burn-down when tasks involve multiple people?

    - by AgileMan
    In my company, a single task can never be completed by one individual: a separate person will QA and code review each task. This means that each individual gives an estimate, per task, of how much time their part will take to complete. The problem is: how should I approach burn-down? If I aggregate the hours together, assume the following estimate:

        10 hrs - Dev time
         4 hrs - QA
         4 hrs - Code review
        Task estimate = 18 hrs

    At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their own part of it. Should they mark their own effort remaining and then ADD the other effort estimates to that? How are you all handling this?

    UPDATE: To help clarify a few things: at my organization, each task within a story requires three people:

    1. someone to develop the task (do unit tests, etc.);
    2. a QA specialist to review the task (they primarily do integration and regression tests);
    3. a tech lead to do code review.

    I don't think there is a wrong way or a right way, but this is our way... and that won't be changing. We work as a team to complete even the smallest slice of a story whenever possible. You cannot actually test whether something works until it is dev complete, and you cannot review the quality of the code before then either... so the best you can do is split things up into small logical slices, so that the bare minimum of functionality can be tested and reviewed as early in the process as possible. My question to those who work this way is how to burn down a "task" when teams are set up like this. Unless a task has its own sub-tasks (which JIRA doesn't allow)... I'm not sure of the best way to track "what's left" on a daily basis.

    Read the article

  • Accessing second hard drive

    - by Jonathan
    So I recently installed Ubuntu 10.10 64-bit on my computer. I installed it on my 60 GB SSD, and during the installation it never acknowledged the existence of my second hard drive. The drive that I keep all my files on, and which I want to make my home folder if I can, is a Western Digital Caviar Black 1TB SATA 6Gb/s 64MB cache (WD1002FAEX). I've read https://help.ubuntu.com/community/Mount but honestly cannot work out how to access the hard drive from my Ubuntu installation. I had Windows 7 64-bit before installing Ubuntu. I have backed up all the files on the hard drive, but if I could just access them straight off, that would be super cool. Does anyone know how I can use the second hard drive? Thank you for your help.

    EDIT: The following directories are currently in my /dev/ folder: ati/, block/, bsg/, bus/, char/, cpu/, disk/, input/, mapper/, net/, pktcdvd/, pts/, shm/, snd/, and usb/

    EDIT: Result of sudo fdisk -l:

        Disk /dev/sda: 60.0 GB, 60022480896 bytes
        255 heads, 63 sectors/track, 7297 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000d2dfd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        6994    56174592   83  Linux
        /dev/sda2            6994        7298     2438145    5  Extended
        /dev/sda5            6994        7298     2438144   82  Linux swap / Solaris

    @djeykib So very close to fixing it... unfortunately the last command you gave says this:

        $ sudo apt-get install linux-lts-backport-natty
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-lts-backport-natty

    Checking on http://www.ubuntuupdates.org/ppas reveals that it is only available for 10.04. Looks like I'll have to unplug and re-plug hardware if I want it working still :(
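
    For reference, note that the fdisk -l output above lists only the 60 GB SSD; the 1 TB drive is not being seen by the kernel at all, which points at detection (cabling, BIOS, or driver support) rather than mounting. Once the disk does show up (say as /dev/sdb with an NTFS partition /dev/sdb1; hypothetical names for illustration), mounting it manually is just:

        sudo mkdir -p /media/data
        sudo mount -t ntfs-3g /dev/sdb1 /media/data   # ntfs-3g gives read-write NTFS access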

    Read the article

  • How can I triple boot Xubuntu, Ubuntu and Windows?

    - by ag.restringere
    I'm an avid Xubuntu (Ubuntu + XFCE) user, but I also dual boot with Windows XP. I originally created three partitions and wanted to use the empty one as a storage volume, but now I want to install Ubuntu 12.04 LTS (the one with Unity) on it to do advanced testing and packaging. Ideally I would love to keep the two totally separate, as I have had problems in the past with conflicts between Unity and XFCE; this way I could wipe the Unity installation if there are problems and really mess around with it. My disk looks like this (/dev/sda1 holds Windows XP; /dev/sda2 is the spare partition intended for Ubuntu):

        Disk /dev/sda: 200.0 GB, 200049647616 bytes
        255 heads, 63 sectors/track, 24321 cylinders, total 390721968 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63    78139454    39069696    7  HPFS/NTFS/exFAT
        /dev/sda2        78141440   156280831    39069696   83  Linux
        /dev/sda3       156282878   386533375   115125249    5  Extended
        /dev/sda4       386533376   390721535     2094080   82  Linux swap / Solaris
        /dev/sda5       156282880   386533375   115125248   83  Linux

    I want to keep each system in its own partition, totally separate, and be able to select any of the three from the GRUB boot menu:

        sda1       --- [Windows XP]
        sda2       --- [Ubuntu 12.04] "Unity"
        sda3(4,5)  --- [Xubuntu 12.04] "Primary XFCE"

    What is the safest and easiest way to do this without messing up my system or requiring invasive activity?
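
    One hedged note on the GRUB side of this: after installing 12.04 to /dev/sda2 (using the installer's "Something else" partitioning option so nothing else is touched), the boot menu of whichever system owns GRUB can be rebuilt, with os-prober picking up all three systems:

        sudo update-grub   # re-runs os-prober; should re-detect Windows XP and the other Ubuntu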

    Read the article

  • Is it possible to get xRandR to see two separate outputs with the nvidia driver?

    - by rumtscho
    I have two monitors, which I have set up with nvidia-settings in TwinView. The result: when I want to do something in xRandR, it does not work. It doesn't report one output per video card head, but a single output mapped to the combined area of both monitors:

        rumtscho@bradbury:~$ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 3840 x 1440, current 3840 x 1440, maximum 3840 x 1440
        default connected 3840x1440+0+0 0mm x 0mm
           3840x1440      50.0*

    Now I promised somebody to help test a driver. The developer is using an open source driver for Intel video cards, and his driver assumes that there is more than one xRandR output, each mapped to a monitor. So I tried rewriting my xorg.conf to somehow get two outputs to show up, but failed. Googling showed that people faced with the xRandR/nvidia problem either stopped using xRandR and achieved what they needed with nvidia-settings, or changed their driver to nouveau. The first is not going to help in my situation, and I am not willing to give up the proprietary driver, because Compiz won't work without it. So does anybody know a way to get nvidia to actually pass information about its outputs on to xRandR?

    Read the article

  • What is wrong with my logic for the divide and conquer algorithm for the closest pair problem?

    - by Programming Noob
    I have been following Coursera's course on algorithms and have a thought about the divide and conquer algorithm for the closest pair problem that I want clarified. As per Prof. Roughgarden's algorithm (which you can see here if you're interested): for a given set of points P, of which we have two sorted copies, Px sorted by x and Py sorted by y, the algorithm closestPair(Px, Py) is:

    1. Divide the points into a left half Q and a right half R, and form sorted copies of both halves along the x and y directions: Qx, Qy, Rx, Ry.
    2. Let closestPair(Qx, Qy) be points p1 and q1.
    3. Let closestPair(Rx, Ry) be p2 and q2.
    4. Let delta be the minimum of dist(p1, q1) and dist(p2, q2).
    5. This is the unfortunate case: let p3, q3 be closestSplitPair(Px, Py, delta).
    6. Return the best result.

    Now, the clarification I want relates to step 5. I should say beforehand that what I'm suggesting is barely any improvement at all, but if you're still interested, read ahead. Prof. Roughgarden says that since the points are already sorted in the x and y directions, to find the best pair in step 5 we need to iterate over the points in the strip of width 2*delta from bottom to top, and in the inner loop we need only 7 comparisons. Can this be bettered to just one? How I think this is possible seemed a little difficult to explain in plain text, so I drew a diagram, wrote it on paper, and uploaded it here. Since no one else came up with this, I'm pretty sure there's some error in my line of thought. But I have literally been thinking about it for HOURS now, and I just HAD to post this. Can someone point out where I'm going wrong?
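
    For reference, a sketch of the step-5 strip scan as taught, in Python (points are assumed to be (x, y) tuples; this is the 7-comparison version the question wants to improve on):

        from math import dist  # math.dist requires Python 3.8+

        def closest_split_pair(px, py, delta):
            """px: points sorted by x; py: the same points sorted by y.
            Scans the vertical strip of half-width delta around the split line.
            Returns the best split pair, or None if nothing beats delta."""
            mid_x = px[len(px) // 2 - 1][0]  # x of the left half's rightmost point
            strip = [p for p in py if mid_x - delta < p[0] < mid_x + delta]
            best, best_pair = delta, None
            for i, p in enumerate(strip):          # bottom to top
                for q in strip[i + 1 : i + 8]:     # at most 7 comparisons per point
                    d = dist(p, q)
                    if d < best:
                        best, best_pair = d, (p, q)
            return best_pair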

    Read the article

  • Banshee gapless playback does not work when playing mp3s

    - by ComputerGuy505
    Even though I have gapless playback enabled in Banshee's settings menu, there is a very short pause between songs. This might be due to the fact that my hard drive's partitions seem weird. fdisk -l produces this output:

        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x4a73c3cb

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      409599      203776    7  HPFS/NTFS/exFAT
        /dev/sda2          409600   724153740   361872070+   7  HPFS/NTFS/exFAT
        /dev/sda3      1456826368  1465145343     4159488    c  W95 FAT32 (LBA)
        /dev/sda4       724154366  1456826367   366336001    5  Extended
        Partition 4 does not start on physical sector boundary.
        /dev/sda5      1440159744  1456826367     8333312   82  Linux swap / Solaris
        /dev/sda6       724154368  1440159743   358002688   83  Linux

        Partition table entries are not in disk order

    Playing mp3s from /dev/sda2 or /dev/sda6 produces this problem. I don't seem to have gapless playback in Rhythmbox or Clementine either, if those media players are supposed to have it. I'm not sure what other info to provide; this is just annoying to me. Thanks for any help.

    Read the article

  • Second Monitor Detected, but not receiving a signal after upgrading to 12.04

    - by user62458
    After I upgraded to 12.04, my second monitor is detected (in display settings) but will not power on. I have scoured the Internet and the forums for a solution and can't find anything; I have found a couple of people with the same problem, but never a solution for it. I am no expert, but I'm certainly not a noob. My computer uses AMD Radeon 6250 graphics, but I do NOT want to use the proprietary graphics drivers: they refuse to work properly with my second monitor (the ATI drivers will only mirror screens, and I've done everything to try to fix that; I DON'T want mirrored screens). Not to mention that the default open-source video drivers seem to work much better than the proprietary ones anyway! Again, Ubuntu's default video drivers work fine, and they even DETECT the second monitor (a 19" Dell). I can drag things off the screen onto the 'space' of the second monitor, and even a screenshot shows that there are two active monitors; but the monitor is OFF. It will not power on. It goes into power-save mode because it is not receiving a signal. For some reason it is not getting the signal to power on, even though Ubuntu thinks the monitor is working properly. I had this working fine on my Sony VAIO yesterday (with Radeon graphics and the default Ubuntu video drivers); I upgraded to a Samsung Series 3 and now I have this issue. I can't for the life of me figure out why the monitor is connected, detected, and given screen space, but the screen won't turn on. xrandr output:

        Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
        VGA-0 connected (normal left inverted right x axis y axis)
           1440x900       59.9 +   75.0
           1280x1024      75.0     60.0
           1152x864       75.0
           1024x768       75.1     70.1     60.0
           832x624        74.6
           800x600        72.2     75.0     60.3     56.2
           640x480        72.8     75.0     66.7     60.0
           720x400        70.1
        LVDS connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
           1366x768       60.1*+
           1280x720       59.9
           1152x768       59.8
           1024x768       59.9
           800x600        59.9
           848x480        59.7
           720x480        59.7
           640x480        59.4
        HDMI-0 disconnected (normal left inverted right x axis y axis)
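
    One clue in that output (an observation, not something the poster confirmed): VGA-0 is connected but no mode on it is marked active with a *, so X has given it desktop space without actually driving a mode, which matches a monitor sleeping for lack of signal. Explicitly setting a mode may wake it:

        xrandr --output VGA-0 --mode 1440x900 --right-of LVDS   # --right-of is an example placement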

    Read the article

  • 1080p Screen resolution problem after 10.04 to 12.04 update

    - by Ale
    I have a Samsung 40" LCD with an NVidia GeForce 6150SE nForce 430 card. I recently upgraded from 10.04 to 12.04, and the best resolution I can get is 1360x768. I've tried the proprietary drivers available in the repository:

        kmod:nvidia_current
        kmod:nvidia_173_updates
        kmod:nvidia_current_updates
        kmod:nvidia_96
        kmod:nvidia_96_updates
        kmod:nvidia_173

    I've also downloaded the latest driver from NVidia's web site, version 295.40, but still no luck. With the Nouveau driver I can only get 1024x768. I know there is no problem with my hardware (video card, cable, and monitor): I was using it perfectly on 10.04. Can anybody suggest something else I could try to get my 1920x1080 resolution back? Thanks in advance. Here is some more information, which I gathered by reading other similar posts on askubuntu:

        $ lspci | grep VGA
        00:0d.0 VGA compatible controller: NVIDIA Corporation C61 [GeForce 6150SE nForce 430] (rev a2)

        $ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 240, current 1360 x 768, maximum 1360 x 768
        default connected 1360x768 0 0 0mm x 0mm
           1360x768       50.0     52.0*
           1024x768       51.0
           800x600        53.0     54.0     55.0
           680x384        56.0     57.0
           640x480        58.0
           576x432        59.0
           512x384        60.0
           400x300        61.0     62.0     63.0
           320x240        64.0
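
    A hedged workaround sketch: the single "default" output above suggests the driver is not reading the TV's EDID, so the native mode is never offered. Adding the mode by hand sometimes works (the output name here is "default", exactly as xrandr reports it):

        $ cvt 1920 1080 60
        # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
        Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

        $ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        $ xrandr --addmode default 1920x1080_60.00
        $ xrandr --output default --mode 1920x1080_60.00

    If xrandr refuses because of the 1360x768 screen maximum, the Modeline has to go into a Monitor section of /etc/X11/xorg.conf instead.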

    Read the article

  • Is there a way to make the Catalyst driver work in Trusty for the Radeon HD 4330?

    - by Laurent BERNABE
    Though the official Catalyst 13.1 software is suitable for the ATI Radeon HD 4330, it can't be installed on Ubuntu 14.04, whose Xorg is newer than the 7.6 that the driver supports. As I need the proprietary driver on Trusty, I would like to know if there is a way to bypass this limitation (for example, by building from the driver sources)? Here are some results from the terminal:

        $ Xorg -version
        X.Org X Server 1.15.1
        Release Date: 2014-04-13
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu
        Current Operating System: Linux bordeaux80 3.13.0-27-generic #50-Ubuntu SMP Thu May 15 18:06:16 UTC 2014 x86_64
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.13.0-27-generic root=UUID=4015e6f7-d11a-45fd-ac9b-5b6c7ab9eaa0 ro quiet splash vt.handoff=7
        Build Date: 16 April 2014  01:36:29PM
        xorg-server 2:1.15.1-0ubuntu2 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.30.2
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.

        $ xrandr
        Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
        LVDS connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 353mm x 198mm
           1366x768       60.0*+
           1280x720       59.9
           1152x768       59.8
           1024x768       59.9
           800x600        59.9
           848x480        59.7
           720x480        59.7
           640x480        59.4
        VGA-0 disconnected (normal left inverted right x axis y axis)
        HDMI-0 disconnected (normal left inverted right x axis y axis)

        $ uname -rp
        3.13.0-27-generic x86_64

        $ glxinfo | grep OpenGL
        OpenGL vendor string: X.Org
        OpenGL renderer string: Gallium 0.4 on AMD RV710
        OpenGL core profile version string: 3.1 (Core Profile) Mesa 10.1.0
        OpenGL core profile shading language version string: 1.40
        OpenGL core profile context flags: (none)
        OpenGL core profile extensions:
        OpenGL version string: 3.0 Mesa 10.1.0
        OpenGL shading language version string: 1.30
        OpenGL context flags: (none)
        OpenGL extensions:

    Regards

    Read the article

  • Ubuntu 12.04 resolution stuck on 640x480

    - by user212483
    I am new to Ubuntu. I was trying to get my HDMI-enabled TV to work with my Ubuntu 12.04 computer, and I installed an NVidia driver using the "Additional Drivers" program. After that didn't work, I started playing around with the Windows 7 dual boot on the computer. I had never used that Windows since installing it, so it was stripped down to the bare minimum; I tried to adjust its resolution (it was on the lowest setting) and tried to connect the HDMI, which didn't work either. After that I came back to my Ubuntu installation, only to find out that it is now stuck at 640x480. I tried to remove the driver I had installed, again using "Additional Drivers", but that didn't help at all. The error that shows up is:

        Could not apply the stored configuration for monitors
        none of the selected modes were compatible with the possible modes:
        Trying modes for CRTC 63
        CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
        CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
        Trying modes for CRTC 64
        CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
        CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)

    Any help would be appreciated, as this is very annoying. Thanks.
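
    A sketch of the usual cleanup when a half-removed NVidia driver leaves X in fallback mode (this assumes the driver came from the Ubuntu repositories; back up any file before deleting it):

        sudo apt-get purge 'nvidia*'     # remove all NVidia driver packages
        sudo rm -f /etc/X11/xorg.conf    # drop any config the driver installer wrote
        rm -f ~/.config/monitors.xml     # the "stored configuration for monitors" in the error
        sudo reboot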

    Read the article

  • Does Google Analytics exclude Campaign traffic from Facebook in the Social reports?

    - by user1612223
    For a while we have used campaign tags when putting posts on Facebook so that we can run campaign reports in Google Analytics on those links. However, it appears that traffic from those links is being excluded from Google's Social reports. For example, between 7/20 and 8/19 I'm seeing 123 visits where Facebook is the source in my Campaigns report, but only 29 visits where Facebook is the source in my Social Sources report. Main questions:

    1. Does Google exclude campaign traffic from its Social reports?
    2. If it does, is there any way to reconcile that, so that the traffic shows up in both reports?
    3. If it doesn't, what could be causing the vast discrepancy?

    One observer noted that we are setting the Medium to "Post" when passing the campaign parameters, and that Google may only allow "Referral" traffic in its Social reports (just speculation). In that case we could potentially change the Medium to "Referral", but that would undermine some of our strategy of being able to set different mediums. I have also considered that the campaign traffic may have come to the site several times and the Social report may count the same user as fewer visits; however, over 70% of the Facebook campaign traffic is new traffic, so at a minimum there would need to be over 85 visits on the Social side for that argument to be valid. I've done several searches for any information on this topic and haven't run across much of anything. I posted the same question on Google's product forum, titled 'Facebook Campaign Traffic Not Showing in Social Reports', and have not gotten a response. The inability to pass campaign data on Facebook posts would make evaluating the performance of those specific posts very difficult, so I'm hoping there is a solution to this.
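
    For concreteness, a link tagged the way described above would look something like this (the domain and campaign name are placeholders); the utm_medium value is the part under suspicion:

        http://www.example.com/landing-page?utm_source=facebook&utm_medium=Post&utm_campaign=summer-sale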

    Read the article

  • Dealing with Fine-Grained Cache Entries in Coherence

    - by jpurdy
    On occasion we have seen significant memory overhead when using very small cache entries. Consider the case where there is a small key (say a synthetic key stored in a long) and a small value (perhaps a number or short string). With most backing maps, each cache entry will require an instance of Map.Entry, and in the case of a LocalCache backing map (used for expiry and eviction), there is additional metadata stored (such as last access time). Given the size of this data (usually a few dozen bytes) and the granularity of Java memory allocation (often a minimum of 32 bytes per object, depending on the specific JVM implementation), it is easily possible to end up with a cache entry that appears to be a couple dozen bytes but occupies several hundred bytes of actual heap, resulting in anywhere from a 5x to 10x increase in stated memory requirements. In most cases, this increase applies to only a few small NamedCaches and is inconsequential -- but in some cases it might apply to one or more very large NamedCaches, in which case it may dominate memory sizing calculations.

    Ultimately, the requirement is to avoid the per-entry overhead, which can be done either at the application level, by grouping multiple logical entries into single cache entries, or at the backing map level, again by combining multiple entries into a smaller number of larger heap objects. At the application level, it may be possible to combine objects based on parent-child or sibling relationships (basically the same requirements that would apply to using partition affinity). If there is no natural relationship, it may still be possible to combine objects, effectively using a Coherence NamedCache as a "map of maps". This forces the application to first find a collection of objects (by performing a partial hash) and then to look within that collection for the desired object. This is most naturally implemented as a collection of entry processors, to avoid pulling unnecessary data back to the client (and also to encapsulate that logic within a service layer).

    At the backing map level, the NIO storage option keeps keys on heap, and so has limited benefit for this situation. The Elastic Data features of Coherence naturally combine entries into larger heap objects, with the caveat that only data -- and not indexes -- can be stored in Elastic Data.
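
    A minimal sketch of the "map of maps" grouping idea, in Python purely to illustrate the shape (a plain dict stands in for a NamedCache; the bucket size is an arbitrary choice):

        GROUP = 64  # logical entries per physical cache entry

        cache = {}  # stands in for a Coherence NamedCache

        def put(key, value):
            cache.setdefault(key // GROUP, {})[key] = value  # partial hash picks the bucket

        def get(key):
            return cache.get(key // GROUP, {}).get(key)      # then search within the bucket

        put(12345, "small value")
        print(get(12345))  # small value
        print(len(cache))  # 1 physical entry holding up to 64 logical ones

    In Coherence itself, put and get would run as entry processors against the bucket key, so the whole bucket is never shipped back to the client, matching the advice above.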

    Read the article

  • How does this circle collision detection math work?

    - by Griffin
    I'm going through the Wildbunny blog to learn about collision detection, and I'm confused about how the vectors he's talking about come into play. Here's the part that confuses me:

        p = ||A-B|| - (r1+r2)

    "The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly, it is not only just a vector that does this, it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little."

        N = (A-B) / ||A-B||
        P = N*p

    "Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance."

    I understand that p is the distance by which the circles penetrate, but I don't get what exactly N and P are. It seems to me N is just the coordinates of the third point of the right triangle formed by points A and B, with (A-B) then divided by the hypotenuse of that triangle, which is the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
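
    A small numeric sketch of what N and P do, using plain Python tuples (splitting the correction 50/50 between the two circles is one common choice, not something the blog prescribes):

        import math

        def resolve_circle_penetration(a, b, r1, r2):
            """a, b: circle centres (x, y); returns the corrected centres."""
            dx, dy = a[0] - b[0], a[1] - b[1]
            d = math.hypot(dx, dy)          # ||A-B||
            p = d - (r1 + r2)               # negative when the circles overlap
            if p >= 0 or d == 0:
                return a, b                 # no penetration (or coincident centres)
            nx, ny = dx / d, dy / d         # N: unit direction from B towards A
            px, py = nx * p, ny * p         # P: the shortest vector undoing the overlap
            a = (a[0] - px / 2, a[1] - py / 2)  # move each circle half of P apart
            b = (b[0] + px / 2, b[1] + py / 2)
            return a, b

        print(resolve_circle_penetration((0, 0), (3, 0), 2, 2))
        # ((-0.5, 0.0), (3.5, 0.0)): the centres are now exactly r1+r2 = 4 apart

    So N is not a point at all: it is the direction between the centres with length 1, and P scales that direction by the overlap so that applying P (split between the circles or applied to one of them) is the minimum movement that makes them just touch.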

    Read the article

  • What's so great about Clojure?

    - by marco-fiset
    I've been taking a look at Clojure lately, and I stumbled upon a post on Stack Overflow that lists some projects following best practices and overall good Clojure code. I wanted to get my head around the language after reading some basic tutorials, so I took a look at some "real-world" projects. After looking at ClojureScript and Compojure (two of the aforementioned "good" projects), I just feel like Clojure is a joke. I don't understand why someone would pick Clojure over, say, Ruby or Python, two languages that I love, that have such clean syntax, and that are very easy to pick up, whereas Clojure uses so many parentheses and symbols everywhere that it ruins the readability for me. I think that Ruby and Python are beautiful, readable, and elegant; they are easy to read even for someone who does not know the language inside out. However, Clojure is opaque to me, and I feel like I must know every tiny detail about the language implementation in order to understand any code. So please, enlighten me! What is so good about Clojure? What is the absolute minimum that I should know about the language in order to appreciate it?

    Read the article

  • Partition does not start on physical sector boundary?

    - by jasmines
    I have one hard drive in my laptop, with two partitions (one ext3 with Ubuntu 12.04 installed and one swap). fdisk gives me a "Partition 1 does not start on physical sector boundary" warning. What is the cause, and do I need to fix it? If so, how? This is sudo fdisk -l (locale output translated):

        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x5a25087f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63  1448577023   724288480+  83  Linux
        Partition 1 does not start on physical sector boundary.
        /dev/sda2      1448577024  1465147391     8285184   82  Linux swap / Solaris

    This is the related sudo lshw result:

        *-disk
             description: ATA Disk
             product: WDC WD7500BPKT-0
             vendor: Western Digital
             physical id: 0
             bus info: scsi@0:0.0.0
             logical name: /dev/sda
             version: 01.0
             serial: WD-WX21CC1T0847
             size: 698GiB (750GB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=5a25087f
           *-volume:0
                description: EXT3 volume
                vendor: Linux
                physical id: 1
                bus info: scsi@0:0.0.0,1
                logical name: /dev/sda1
                logical name: /
                version: 1.0
                serial: cc5c562a-bc59-4a37-b589-805b27b2cbd7
                size: 690GiB
                capacity: 690GiB
                capabilities: primary bootable journaled extended_attributes large_files recover ext3 ext2 initialized
                configuration: created=2010-02-27 09:18:28 filesystem=ext3 modified=2012-06-23 18:33:59 mount.fstype=ext3 mount.options=rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered mounted=2012-06-28 00:20:47 state=mounted
           *-volume:1
                description: Linux swap volume
                physical id: 2
                bus info: scsi@0:0.0.0,2
                logical name: /dev/sda2
                version: 1
                serial: 16a7fee0-be9e-4e34-9dc3-28f4eeb61bf6
                size: 8091MiB
                capacity: 8091MiB
                capabilities: primary nofs swap initialized
                configuration: filesystem=swap pagesize=4096

    These are the related /etc/fstab lines:

        UUID=cc5c562a-bc59-4a37-b589-805b27b2cbd7 /    ext3 errors=remount-ro,user_xattr 0 1
        UUID=16a7fee0-be9e-4e34-9dc3-28f4eeb61bf6 none swap sw                           0 0
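
    For reference, the arithmetic behind the warning: this disk exposes 4096-byte physical sectors, i.e. 8 logical 512-byte sectors each, so a partition is aligned only when its start sector is divisible by 8. /dev/sda1 starts at sector 63, and 63 mod 8 = 7, hence the warning; /dev/sda2 starts at 1448577024, which is divisible by 8 and so draws no complaint.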

    Read the article

  • When is the right time for a programmer to join an open source project?

    - by Mahesh
    Most newcomers to programming start with basic projects. Most C++ programmers spend some time on puzzles and contests, but this is not always helpful; sometimes you have to spend time on real projects. Starting your own open source project can be a problem for self-learning newbies because of the lack of mentors and peers who can look at your code and give suggestions. Joining existing open source projects can solve this problem, and some projects may be best suited for new programmers. Besides, everybody is a newbie at some point, so I'll try to frame this question from a beginner's perspective. I tried a few questions on Stack Overflow before asking this, like "How do I join?", "What's the bare minimum you need?", "How do I get involved with open source?", and "What level of programming is required?". But none of this helps me when it comes to self-evaluating my skills. How do I find that out? How can I check whether I have what it takes to join an open source project, and whether I am really comfortable with a huge code base? My question is: when should you consider yourself ready to join an open source project? I mean, how will you test yourself to know that you're ready to take on the burden of a big or small open source project? How will you test whether you can work with version control, other programmers, tight schedules, and so on?

    Read the article

  • Implementing a dynamic query handler on historical data

    - by user2390183
    EDIT: I have refined the question to focus on the core issue.

    Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume information collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized source. Example queries:

    Simple: for a given XYZ postcode, what is the average price for a 3-bedroom house?
    Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historical data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes like building type, year built, and features)?

    In addition to the average price, the application should support other property statistics (maximum or minimum price, etc.) and trends (graphs) of a selected property attribute over a period of time. Hence the queries should not force the search through a primary key or a few fixed fields. In other words, queries can be:

    What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
    What kind of properties can we get for price X (irrespective of location or house type)?

    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interfaces, data warehousing, or something else) this problem (dynamic queries on historical data) belongs to, so that I can explore further. My findings so far (I could be wrong on the following, so please correct me if you think so):

    I briefly read about BI/data analytics; I think it is a heavyweight solution for my problem and has scalability issues.
    DB design: as I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes of a property, or of other entities (such as users) that I am going to bring in, to evolve quickly, so maintenance would be an issue; and as multiple users will execute queries at the same time, performance would be a bottleneck.
    Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using such general-purpose tools for my problem makes me feel like I am solving it in assembly).
    Big-data solutions are aimed at analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design or implementation experience with the back end of property-listing or similar portals.)
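
    For concreteness, a toy sketch (in Python/pandas, with invented column names) of the "arbitrary filters" query shape being described; this illustrates the query style only and says nothing about the right storage back end:

        import pandas as pd

        # Toy historical sales table; real data would come from the cloud source.
        sales = pd.DataFrame({
            "postcode": ["XYZ", "XYZ", "ABC", "XYZ"],
            "bedrooms": [3, 3, 2, 4],
            "price":    [250_000, 275_000, 180_000, 390_000],
            "sold_on":  pd.to_datetime(["2013-05-01", "2013-05-20",
                                        "2013-05-15", "2013-06-02"]),
        })

        # "Average price for a 3-bed in XYZ" as an arbitrary dict of equality
        # filters, so new attributes need no change to the query code.
        def average_price(df, **filters):
            for column, value in filters.items():
                df = df[df[column] == value]
            return df["price"].mean()

        print(average_price(sales, postcode="XYZ", bedrooms=3))  # 262500.0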

    Read the article

  • Where to place PHP libraries/extensions?

    - by gdaniel
    I am new to a lot of server configuration and options. I want to add extra PHP libraries/extensions to my server. Where do I add them? (I am on a CentOS 6.5 VPS.) For example, I want to add the phpseclib PHP library, and its website's usage instructions are minimal: "This library is written using the same conventions that libraries in the PHP Extension and Application Repository (PEAR) used to be written in (current requirements break PHP4 compatibility). In particular, this library needs to be in your include_path:"

        <?php
        set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
        include('Net/SSH2.php');
        ?>

    That tells me how to use it, but it doesn't tell me where to add the actual library files. Should I add them under:

        /usr/local/lib ?
        /usr/local/lib/php ?
        /usr/local/lib/php/pear ?

    Or can I add them under public_html? Also, my VPS has several users under /home/... Is there a way to make the library available to only one user?
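
    Two hedged pointers (the paths below are examples, not CentOS requirements): any directory already on PHP's include_path works for a pure-PHP library like phpseclib, and you can check what that is with php -i; for a single user, placing the library under their own tree and extending include_path in the script (exactly as in the snippet above) keeps it private to them.

        php -i | grep include_path            # e.g. include_path => .:/usr/share/pear:/usr/share/php
        sudo cp -r phpseclib /usr/share/php   # system-wide: put it on an include_path directory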

    Read the article

  • Quick and Good: ( Requirement -> Validation -> Design ) for self use?

    - by Yugal Jindle
    How do I casually do the required software engineering and design work? I am an inexperienced developer and face the following problem: my company is a start-up and has no fixed software engineering process. I am assigned tasks with unclear and conflicting requirements, and I don't have to follow any designs or verify requirements officially. Problem: I code all day and finally get stuck where requirements conflict, and I have to start over again. I cannot spend a lot of time writing a proper SRS or SDD. How should I:

    List out requirements for myself (not an official document)?
    Verify and validate the requirements?
    Visualize them?
    Design for them with minimum effort (as the design is only for my own use)?

    I don't want to waste my time coding something that's going to collapse under a requirement conflict or some unexpected change! I don't want to compromise on quality, but I also don't want to rewrite everything when something I didn't expect changes. I imagine making a diagram of my thought process that will show me the conflicts in the diagram itself; then, after correcting the diagram, I decide on my design, structure my code in terms of interfaces or something, and finally start implementing. I can sense the lack of a systematic approach, but I don't know how to proceed.

    Update: Please suggest some tools that can ask me the questions and help me aggregate the important details. How can I get the diagram that I talked about for requirement verification?

    Read the article

  • How do I mount a drive in /media/userName/ the way Nautilus does, using udisks?

    - by Bsienn
    On my current installation of Ubuntu 13.10 with Unity, when I click on a drive in Nautilus it gets mounted at /media/username/mountedDrive, and I read that Nautilus uses udisks to do that. Basically, I want to auto-mount my drive with udisks at startup, using this method. The problem is that it mounts the drive at /media/mountedDrive, but I want it the way Nautilus does it, at /media/username/mountedDrive. I want the NTFS Data drive to be auto-mounted under /media/bsienn/.

        bsienn@bsienn-desktop:~$ blkid
        /dev/sda1: LABEL="System Reserved" UUID="8230744030743D6B" TYPE="ntfs"
        /dev/sda2: LABEL="Windows 7" UUID="60100EA5100E81F0" TYPE="ntfs"
        /dev/sda3: LABEL="Data" UUID="882C04092C03F14C" TYPE="ntfs"
        /dev/sda5: UUID="8768800f-59e1-41a2-9092-c0a8cb60dabf" TYPE="swap"
        /dev/sda6: LABEL="Ubuntu Drive" UUID="13ea474a-fb27-4c91-bae7-c45690f88954" TYPE="ext4"
        /dev/sda7: UUID="69c22e73-9f64-4b48-b854-7b121642cd5d" TYPE="ext4"

        bsienn@bsienn-desktop:~$ sudo fdisk -l
        Disk /dev/sda: 160.0 GB, 160000000000 bytes
        255 heads, 63 sectors/track, 19452 cylinders, total 312500000 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x8d528d52

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848   117730069    58761611    7  HPFS/NTFS/exFAT
        /dev/sda3       158690072   312494116    76902022+   7  HPFS/NTFS/exFAT
        /dev/sda4       117731326   158689279    20478977    5  Extended
        /dev/sda5       137263104   141260799     1998848   82  Linux swap / Solaris
        /dev/sda6       141262848   158689279     8713216   83  Linux
        /dev/sda7       117731328   137263103     9765888   83  Linux

        Partition table entries are not in disk order

        bsienn@bsienn-desktop:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point>   <type>  <options>       <dump>  <pass>
        # / was on /dev/sda7 during installation
        UUID=69c22e73-9f64-4b48-b854-7b121642cd5d /    ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=8768800f-59e1-41a2-9092-c0a8cb60dabf none swap sw                0 0

    Desired effect: picture link
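
    A hedged pointer, since 13.10 ships udisks2: udisksctl mounts a device as the calling user under /media/$USER, so running it from a session autostart entry (rather than a root-owned boot script) should give the Nautilus-style mount point:

        udisksctl mount -b /dev/sda3   # run as bsienn, ends up at /media/bsienn/Data (from the label)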

    Read the article
