Search Results

Search found 9471 results on 379 pages for 'technology tid bits'.

Page 80/379 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • How-to: determine 64-bitness of Windows? [closed]

    - by warren
    Possible Duplicate: Tell the version of Windows XP (64-bits or 32-bits) Is it possible to determine whether a given installation of Windows is 32- or 64-bit? Right-clicking on My Computer and selecting Properties does not readily show this information, and typing ver at the command prompt doesn't say anything about the platform on which Windows is installed either. Under Linux, I'd use uname -a to find out what kernel was running. Is there an analog on Windows?
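
    One programmatic analogue, offered as a hedged sketch rather than the canonical answer: on .NET 4 or later the Environment class exposes both the OS and process bitness directly (the only assumption here is that a .NET 4 runtime is available on the box).

      using System;

      class Bitness
      {
          static void Main()
          {
              // True when Windows itself is 64-bit, regardless of this process.
              Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
              // True only when this particular process runs as 64-bit.
              Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
          }
      }

    Without .NET, echoing %PROCESSOR_ARCHITECTURE% (and %PROCESSOR_ARCHITEW6432% from inside a 32-bit process) at a command prompt gives a similar hint.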

    Read the article

  • How does Deep Zoom (Microsoft Seadragon) work?

    - by Norman Ramsey
    I saw an incredible demo of Photosynth and Seadragon, some "deep zoom" technology recently acquired by Microsoft. There's also a less responsive version on the web. I'd really like to know how the technology works. Blog entries and web pages are welcome, but the ideal answer will include one or more references to published papers in the professional literature.

    Read the article

  • What does "cpuid level" mean?

    - by ogzylz
    As an example, here is the output for just one core of a 16-core machine. What does a "cpuid level" of "6" mean? Also, what do a "bogomips" of "5992.10" and a "clflush size" of "64" mean? processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 6 model name : Intel(R) Xeon(TM) CPU 3.00GHz stepping : 8 cpu MHz : 2992.689 cache size : 4096 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 fpu : yes fpu_exception: yes cpuid level : 6 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm bogomips : 5992.10 clflush size : 64 cache_alignment: 128 address sizes: 40 bits physical, 48 bits virtual power management:
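
    A quick arithmetic check against the dump above (an observation, not a definitive answer): bogomips is only the kernel's busy-wait calibration figure, and here it comes out at almost exactly twice the reported core clock, which is the expected result for this kind of CPU:

      \[ 2 \times 2992.689\ \text{MHz} = 5985.378 \approx 5992.10\ \text{(reported bogomips)} \]

    It therefore says nothing useful about real-world performance. Similarly, "clflush size : 64" is just the cache-line size in bytes that a single CLFLUSH instruction operates on.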

    Read the article

  • Can I grant permissions on files in Windows 7 using a security identifier from another machine?

    - by Thomas
    I have an external hard drive, and I wish to grant permissions on some files to users from two different computers without having to hook it up to both of them. I know the SID of the user on the other computer; I'd like to know if and how I can grant permissions on files using that SID. I'm running Windows 7 Professional 64-bit, and the other computer runs Windows 7 Home Premium 64-bit; they are not in a domain, just separate computers on a home network (not even the same homegroup). Note: duplicate of: Is there a way to give NTFS file permissions to users from other Windows installations?
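
    Because NTFS stores raw SIDs in each ACL entry, a rule can be granted to a SID that has no matching account on the local machine; it simply shows up as an unresolved SID until the drive is attached to the computer that knows it. A minimal .NET sketch of that idea follows; the path and SID are placeholders, and this is one possible approach rather than the question's accepted answer.

      using System.IO;
      using System.Security.AccessControl;
      using System.Security.Principal;

      class GrantBySid
      {
          static void Main()
          {
              // Hypothetical file on the external drive and hypothetical SID taken
              // from the other machine -- substitute the real values.
              string path = @"E:\shared\report.docx";
              var foreignSid = new SecurityIdentifier("S-1-5-21-1111111111-2222222222-3333333333-1001");

              FileSecurity acl = File.GetAccessControl(path);
              acl.AddAccessRule(new FileSystemAccessRule(
                  foreignSid,               // principal identified purely by its SID
                  FileSystemRights.Modify,  // rights to grant
                  AccessControlType.Allow));
              File.SetAccessControl(path, acl);
          }
      }

    The icacls command line accepts the same thing when the SID is prefixed with an asterisk, e.g. icacls <file> /grant *S-1-5-21-...:(M).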

    Read the article

  • Why does a 1 Gbit card have its output limited to 80 MiB/s?

    - by Gallus
    I'm trying to utilize the maximum bandwidth provided by my 1 Gbit network card, but it's always limited to 80 MiB/s (real megabytes per second). What can the reason be? Card description (lshw output): description: Ethernet interface product: DGE-530T Gigabit Ethernet Adapter (rev 11) vendor: D-Link System Inc physical id: 0 bus info: pci@0000:03:00.0 logical name: eth1 version: 11 serial: 00:22:b0:68:70:41 size: 1GB/s capacity: 1GB/s width: 32 bits clock: 66MHz capabilities: pm vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation The card is placed in the following PCI slot: *-pci:2 description: PCI bridge product: 82801 PCI Bridge vendor: Intel Corporation physical id: 1e bus info: pci@0000:00:1e.0 version: 92 width: 32 bits clock: 33MHz capabilities: pci subtractive_decode bus_master cap_list The PCI isn't PCI Express, right? It's a legacy PCI slot? So maybe this is the reason? The OS is Linux.
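
    A rough sanity check, assuming the card really is sitting behind the conventional (non-Express) 32-bit bridge shown above: the theoretical ceiling of that bus is already close to the observed figure.

      \[ 33\ \text{MHz} \times 32\ \text{bits} = 1056\ \text{Mbit/s} \approx 132\ \text{MB/s} \]

    After arbitration, bus sharing and protocol overhead, 80-100 MB/s of sustained payload is typical for conventional PCI, so the slot rather than the adapter is the plausible bottleneck; the card's own 66 MHz capability does not help when the 82801 bridge clocks the bus at 33 MHz.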

    Read the article

  • How to use graphical line drawing characters with Midnight Commander on OS X under ssh?

    - by Sorin Sbarnea
    I discovered that when I ssh to a machine running OS X 10.6 and use mc, I do not see the graphical line drawing characters. This does not happen if I open Terminal locally and start mc. I'm connecting using PuTTY configured to use xterm-color, a configuration that works just fine when I ssh to a Linux machine. The mc on OS X is version 4.7.0 (installed using MacPorts). What locale returns: LC_CTYPE="C" <== ssh LC_CTYPE="UTF-8" <== Terminal.app ssh: mc display bits shows: 7-bit ASCII (changing it does not help, it defaults to the same value) Terminal.app: mc display bits shows: UTF-8 The environment shows TERM=xterm-color in both cases, Terminal.app and ssh, but mc looks different. I filed a bug against mc with this information at http://www.midnight-commander.org/ticket/2339

    Read the article

  • PRNG test suite: bitstream and stream length

    - by Martin Trigaux
    On the NIST website, there is a tool called sts (Statistical Test Suite) that allows us to test the validity of a pseudo-random number generator based on a stream of bits given as input. When running the program, there are two variables I am not sure I understand: the stream length and the number of bitstreams. Is the stream length the size of the file? The number of bits inside? The size of a single bitstream? Are the bitstreams subsets of the whole file, and if so, how are they chosen? Let's say I have a text file containing 1,000,000 bits in ASCII. What should my arguments be? You can find the user manual here if needed (I didn't find an explanation of these variables in it). Thank you
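
    For orientation only (a hedged reading of how the suite's parameters are usually described, not an authoritative answer): each bitstream is treated as one independent sequence of stream-length bits, and the requested number of bitstreams is read back to back from the input file, so the two values have to fit inside the file:

      \[ \text{stream length} \times \text{number of bitstreams} \le \text{total bits in the file} \]

    With 1,000,000 ASCII bits, one workable split under that reading would be 10 bitstreams of 100,000 bits each, since 10 x 100,000 = 1,000,000.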

    Read the article

  • How should I interpret the specifications of a SSD?

    - by paulgreg
    When considering buying an SSD, how should I interpret its different specifications? Here are some specific things that need to be deciphered: the controller (this can affect performance and endurance more than all other factors combined); bus technology; form factor (physical size); capacity; NAND or NOR technology; power consumption during read, during write, and when idle; and read/write burst and sustained throughput. I would like all of these explained in more detail, along with their actual importance in selecting an SSD.

    Read the article

  • What's the difference between DisplayPort, DVI and HDMI?

    - by Leo Bushkin
    As an end consumer, are there any significant differences between the newer DisplayPort interface and DVI/HDMI that I should be aware of? I realize they are different connector types and require compatible equipment; I'm primarily interested in whether there are functional or performance benefits of one technology over another. Should I prefer one technology over another on newer video card equipment?

    Read the article

  • Very slow Windows 7 on Thinkpad T61

    - by bogdanf
    I have a very strange problem with my fresh install of Windows 7 Professional, 64-bit, on my Lenovo ThinkPad T61: the overall performance is very slow and the disk is constantly spinning, even without any program running (right after boot, no other programs installed). The boot process itself is very slow (4-5 minutes). I should mention that the laptop was fine on XP until the upgrade. Thanks! Additional info (as requested in the comments): 2 GB RAM. Yes, I added all the manufacturer (Lenovo) drivers and updates (using the utility provided by Lenovo). Tried both the 32- and 64-bit editions; the 32-bit one performs a little better, but is not very usable either. The HDD has enough free space (20 GB or so). The problem is present on a fresh install, so emptying the recycle bin or uninstalling programs (there aren't any except plain Windows 7) would not help. I'm not a newbie, so no obvious causes are left unchecked.

    Read the article

  • Ubuntu 12.04 3.2 Kernel showing incorrect "cache size" in cpuinfo?

    - by Tom G
    2x Xeon E5620, 16 cores altogether. /proc/cpuinfo shows the cache as only 4096 KB. According to Intel this CPU should have 12 MB of "smart cache". Searching for E5620 cpuinfo output shows the correct number: cache size : 12288 KB However, mine shows this: processor : 15 vendor_id : GenuineIntel cpu family : 6 model : 44 model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz stepping : 2 microcode : 0x1 cpu MHz : 2400.104 cache size : 4096 KB fpu : yes fpu_exception : yes cpuid level : 11 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx bogomips : 4800.20 clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual Any idea why it looks like 8 MB of CPU cache is missing?
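
    One small arithmetic note on the numbers quoted above (an observation, not a diagnosis of the missing cache): the E5620 is a quad-core part with Hyper-Threading, so the "16 cores" here are logical processors rather than physical cores,

      \[ 2\ \text{sockets} \times 4\ \text{cores} \times 2\ \text{threads} = 16\ \text{logical CPUs} \]

    and the 12 MB that Intel quotes is the L3 "smart cache" shared per socket, which is the value a stock kernel normally reports in the cache size field.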

    Read the article

  • Which server requirements for Redmine, Git and website hosting?

    - by Ephismen
    Nine other students and I are going to start a project that will last a minimum of 2 years, and for this purpose we are looking to host all our work on a server. Here are a few tools we would like to work with: Redmine; Git; hosting a website/blog to show our work; hosting an internal, private development website/blog. We haven't decided yet which OS we will install, but we were leaning toward Ubuntu or Fedora. Having a limited budget, $300/year, we would like some advice on the following dedicated server specifications: Kimsufi 2G: Hardware: Intel Celeron/Atom, 1.20 GHz, 64-bit, 2 GB DDR2, 1 TB HDD, 100 GB FTP backup; Network: 100 Mbps connection, unlimited traffic. Dedibox SC: Hardware: Dell Nano U2250, 1x 1.6 GHz, 64-bit, 2 GB DDR2, 160 GB HDD; Network: 1 Gbit/s connection, unlimited traffic. Will these servers be sufficient? Should we host the websites on another platform? Would a virtualized server be more appropriate? Thank you for your answers, Ephismen.

    Read the article

  • Bsplayer - load audio tracks from external files

    - by torran
    I have a movie file: Video ID : 1 Format : AVC Format/Info : Advanced Video Codec Format profile : [email protected] Format settings, CABAC : Yes Format settings, ReFrames : 5 frames Muxing mode : Container [email protected] Codec ID : V_MPEG4/ISO/AVC Duration : 54mn 13s Bit rate : 3 380 Kbps Nominal bit rate : 3 459 Kbps Width : 1 280 pixels Height : 720 pixels Display aspect ratio : 16:9 Frame rate : 23.976 fps Resolution : 8 bits Colorimetry : 4:2:0 Scan type : Progressive Bits/(Pixel*Frame) : 0.153 Stream size : 1.28 GiB (88%) Writing library : x264 core 88 r1471 1144615 Audio ID : 2 Format : AC-3 Format/Info : Audio Coding 3 Codec ID : A_AC3 Duration : 54mn 16s Bit rate mode : Constant Bit rate : 384 Kbps Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFE Sampling rate : 48.0 KHz Stream size : 149 MiB (10%) and additional audio files in the same folder: .mp3 and .ac3. How can I load them with BSPlayer? Right click > Audio > Audio streams is empty. If I open the movie with Media Player Classic I can switch audio files.

    Read the article

  • Linux mdadm software RAID 6 - does it support bit corruption recovery?

    - by user101203
    Wikipedia says "RAID 2 is the only standard RAID level, other than some implementations of RAID 6, which can automatically recover accurate data from single-bit corruption in data." Does anyone know if the RAID 6 mdadm implementation in Linux is one such implementation that can automatically detect and recover from single-bit data corruption? This pertains to CentOS / Red Hat 6, if those are different from other versions. I tried searching online but didn't have much luck. With SATA error rates being 1 in 1E14 bits, and a 2 TB SATA disk containing 1.6E13 bits, this is especially relevant to preventing data corruption. Thanks!
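
    A back-of-the-envelope reading of those two numbers, just to make the stakes concrete (1 in 1E14 is the drive's quoted unrecoverable-read-error rate, so treat this as an order-of-magnitude estimate):

      \[ \frac{1.6 \times 10^{13}\ \text{bits per full read}}{1 \times 10^{14}\ \text{bits per error}} = 0.16\ \text{expected errors} \]

    In other words, roughly a one-in-six chance of hitting at least one unrecoverable bit on every complete scan of the 2 TB disk (about 1 - e^{-0.16}, or 15%, if errors are independent), which is exactly why automatic detection and repair matters during rebuilds and scrubs.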

    Read the article

  • Use LibTIff in C# to convert from one tiff format to another

    - by Kevin
    I have a Tiff using JPEG format the WPF / C# can not handle via TiffBitmapDecoder. Our clients use the file format and our current C++ and Java code handles it. I need to convert this to a format I can display using TiffBitmapDecoder or standard BitmapImage. It looks like the C# version of LibTiff is the way to go but I am not having any luck converting in code. Here is my attempt - I always end up with corrupt files. ` Boolean doSystemLoad = false; Tiff tiff = null; try { tiff = Tiff.Open(file, "r"); } catch (Exception e) // TIFF could not handle, let OS do it { doSystemLoad = true; } if (tiff != null) { width = Double.Parse(tiff.GetField(TiffTag.IMAGEWIDTH)[0].Value.ToString()); height = Double.Parse(tiff.GetField(TiffTag.IMAGELENGTH)[0].Value.ToString()); int bits = Int32.Parse(tiff.GetField(TiffTag.BITSPERSAMPLE)[0].Value.ToString()); int samples = Int32.Parse(tiff.GetField(TiffTag.SAMPLESPERPIXEL)[0].Value.ToString()); string compression = tiff.GetField(TiffTag.COMPRESSION)[0].Value.ToString(); Console.WriteLine("Image is " + width + " x " + height + " bits " + bits + " sample " + samples); Console.WriteLine("Compression " + compression); // We allow OS to load anything that is not JPEG compression doSystemLoad = compression.ToLower().IndexOf("jpeg") == -1; string tempFile = Path.GetTempFileName() + ".tiff"; // Convert here then load converted via OS if (!doSystemLoad) { Console.WriteLine(">> Attempting to convert... " + tempFile); Console.WriteLine(" Scan line " + tiff.ScanlineSize()); Tiff tiffOut = Tiff.Open(tempFile, "w"); tiffOut.SetField(TiffTag.IMAGEWIDTH, width); tiffOut.SetField(TiffTag.IMAGELENGTH, height); tiffOut.SetField(TiffTag.BITSPERSAMPLE, bits); tiffOut.SetField(TiffTag.SAMPLESPERPIXEL, samples); tiffOut.SetField(TiffTag.ROWSPERSTRIP, 1L); tiffOut.SetField(TiffTag.COMPRESSION, Compression.NONE); tiffOut.SetField(TiffTag.ORIENTATION, BitMiracle.LibTiff.Classic.Orientation.TOPLEFT); tiffOut.SetField(TiffTag.FAXMODE, FaxMode.CLASSF); tiffOut.SetField(TiffTag.GROUP3OPTIONS, 5); tiffOut.SetField(TiffTag.PHOTOMETRIC, Photometric.RGB); tiffOut.SetField(TiffTag.FILLORDER, FillOrder.MSB2LSB); tiffOut.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG); tiffOut.SetField(TiffTag.RESOLUTIONUNIT, ResUnit.INCH); tiffOut.SetField(TiffTag.XRESOLUTION, 100.0); tiffOut.SetField(TiffTag.YRESOLUTION, 100.0); tiffOut.SetField(TiffTag.SUBFILETYPE, FileType.PAGE); tiffOut.SetField(TiffTag.PAGENUMBER, new object[] { 1, 1 }); tiffOut.SetField(TiffTag.PAGENAME, "Page 1"); Byte[] scanLine = new Byte[tiff.ScanlineSize() + 5000]; for (int row = 0; row < height; row++) { tiff.ReadScanline(scanLine, row); tiffOut.WriteScanline(scanLine, row); } tiffOut.Dispose(); } tiff.Dispose(); Stream imageStreamSource = new FileStream(tempFile, FileMode.Open, FileAccess.Read, FileShare.Read); TiffBitmapDecoder decoder = new TiffBitmapDecoder(imageStreamSource, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default); BitmapSource bitmapSource = decoder.Frames[0]; width = bitmapSource.Width; height = bitmapSource.Height; imageMain.Width = width; imageMain.Height = height; imageMain.Source = bitmapSource; } if (doSystemLoad) { Stream imageStreamSource = new FileStream(file, FileMode.Open, FileAccess.Read, FileShare.Read); TiffBitmapDecoder decoder = new TiffBitmapDecoder(imageStreamSource, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default); BitmapSource bitmapSource = decoder.Frames[0]; width = bitmapSource.Width; height = bitmapSource.Height; imageMain.Width = width; 
imageMain.Height = height; imageMain.Source = bitmapSource; } `

    Read the article

  • Bewildering SegFault involving STL sort algorithm.

    - by just_wes
    Hello everybody, I am completely perplexed at a seg fault that I seem to be creating. I have: vector<unsigned int> words; and global variable string input; I define my custom compare function: bool wordncompare(unsigned int f, unsigned int s) { int n = k; while (((f < input.size()) && (s < input.size())) && (input[f] == input[s])) { if ((input[f] == ' ') && (--n == 0)) { return false; } f++; s++; } return true; } When I run the code: sort(words.begin(), words.end()); The program exits smoothly. However, when I run the code: sort(words.begin(), words.end(), wordncompare); I generate a SegFault deep within the STL. The GDB back-trace code looks like this: #0 0x00007ffff7b79893 in std::string::size() const () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libstdc++.so.6 #1 0x0000000000400f3f in wordncompare (f=90, s=0) at text_gen2.cpp:40 #2 0x000000000040188d in std::__unguarded_linear_insert<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, unsigned int, bool (*)(unsigned int, unsigned int)> (__last=..., __val=90, __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1735 #3 0x00000000004018df in std::__unguarded_insertion_sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1812 #4 0x0000000000402562 in std::__final_insertion_sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1845 #5 0x0000000000402c20 in std::sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:4822 #6 0x00000000004012d2 in main (argc=1, args=0x7fffffffe0b8) at text_gen2.cpp:70 I have similar code in another program, but in that program I am using a vector instead of vector. For the life of me I can't figure out what I'm doing wrong. Thanks!

    Read the article

  • Using GA in GUI

    - by AlexT
    Sorry if this isn't clear as I'm writing this on a mobile device and I'm trying to make it quick. I've written a basic Genetic Algorithm with a binary encoding (genes) that builds a fitness value and evolves through several iterations using tournament selection, mutation and crossover. As a basic command-line example it seems to work. The problem I've got is with applying a genetic algorithm within a GUI as I am writing a maze-solving program that uses the GA to find a method through a maze. How do I turn my random binary encoded genes and fitness function (add all the binary values together) into a method to control a bot around a maze? I have built a basic GUI in Java consisting of a maze of labels (like a grid) with the available routes being in blue and the walls being in black. To reiterate my GA performs well and contains what any typical GA would (fitness method, get and set population, selection, crossover, etc) but now I need to plug it into a GUI to get my maze running. What needs to go where in order to get a bot that can move in different directions depending on what the GA says? Rough pseudocode would be great if possible As requested, an Individual is built using a separate class (Indiv), with all the main work being done in a Pop class. When a new individual is instantiated an array of ints represent the genes of said individual, with the genes being picked at random from a number between 0 and 1. The fitness function merely adds together the value of these genes and in the Pop class handles selection, mutation and crossover of two selected individuals. There's not much else to it, the command line program just shows evolution over n generations with the total fitness improving over each iteration. EDIT: It's starting to make a bit more sense now, although there are a few things that are bugging me... As Adamski has suggested I want to create an "Agent" with the options shown below. The problem I have is where the random bit string comes into play here. The agent knows where the walls are and has it laid out in a 4 bit string (i.e. 0111), but how does this affect the random 32 bit string? (i.e. 10001011011001001010011011010101) If I have the following maze (x is the start place, 2 is the goal, 1 is the wall): x 1 1 1 1 0 0 1 0 0 1 0 0 0 2 If I turn left I'm facing the wrong way and the agent will move completely off the maze if it moves forward. I assume that the first generation of the string will be completely random and it will evolve as the fitness grows but I don't get how the string will work within a maze. So, to get this straight... The fitness is the result of when the agent is able to move and is by a wall. The genes are a string of 32 bits, split into 16 sets of 2 bits to show the available actions and for the robot to move the two bits need to be passed with four bits from the agent showings its position near the walls. If the move is to go past a wall the move isn't made and it is deemed invalid and if the move is made and if a new wall is found then the fitness goes up. Is that right?
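
    As a purely illustrative sketch (written in C# because the excerpt names no language): one common way to wire a bit-string genome to a maze bot is to decode the 32 bits into 16 two-bit actions and replay them step by step, ignoring any step that would walk into a wall or off the grid. The direction mapping, grid layout, fitness scoring and class names below are assumptions for the example, not part of the original GA.

      using System;

      class MazeBot
      {
          // 0 = open, 1 = wall, 2 = goal; the start is assumed to be at (0, 0),
          // and the layout is only one possible reading of the grid in the question.
          static readonly int[,] Maze =
          {
              { 0, 1, 1, 1, 1 },
              { 0, 0, 1, 0, 0 },
              { 1, 0, 0, 0, 2 },
          };

          // Assumed mapping of each 2-bit gene: 00 = up, 01 = right, 10 = down, 11 = left.
          static readonly (int dr, int dc)[] Moves = { (-1, 0), (0, 1), (1, 0), (0, -1) };

          static void Main()
          {
              uint genome = 0b10001011011001001010011011010101; // one 32-bit individual
              int r = 0, c = 0, fitness = 0;

              for (int step = 0; step < 16; step++)
              {
                  int action = (int)((genome >> (step * 2)) & 0b11); // next 2-bit gene
                  int nr = r + Moves[action].dr, nc = c + Moves[action].dc;

                  bool inside = nr >= 0 && nr < Maze.GetLength(0) && nc >= 0 && nc < Maze.GetLength(1);
                  if (!inside || Maze[nr, nc] == 1) continue;       // invalid move: skip it

                  r = nr; c = nc; fitness++;                        // reward each legal move
                  if (Maze[r, c] == 2) { fitness += 100; break; }   // large bonus for reaching the goal
              }

              Console.WriteLine($"Ended at ({r},{c}) with fitness {fitness}");
          }
      }

    The GUI side then only has to animate the resulting sequence of (r, c) positions, while the GA keeps evolving genomes against this fitness function.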

    Read the article

  • std::basic_string full specialization (g++ conflict)

    - by SoapBox
    I am trying to define a full specialization of std::basic_string< char, char_traits<char>, allocator<char> > which is typedef'd (in g++) by the <string> header. The problem is, if I include <string> first, g++ sees the typedef as an instantiation of basic_string and gives me errors. If I do my specialization first then I have no issues. I should be able to define my specialization after <string> is included. What do I have to do to be able to do that? My Code: #include <bits/localefwd.h> //#include <string> // <- uncommenting this line causes compilation to fail namespace std { template<> class basic_string< char, char_traits<char>, allocator<char> > { public: int blah() { return 42; } size_t size() { return 0; } const char *c_str() { return ""; } void reserve(int) {} void clear() {} }; } #include <string> #include <iostream> int main() { std::cout << std::string().blah() << std::endl; } The above code works fine. But, if I uncomment the first #include <string> line, I get the following compiler errors: blah.cpp:7: error: specialization of ‘std::basic_string<char, std::char_traits<char>, std::allocator<char> >’ after instantiation blah.cpp:7: error: redefinition of ‘class std::basic_string<char, std::char_traits<char>, std::allocator<char> >’ /usr/include/c++/4.4/bits/stringfwd.h:52: error: previous definition of ‘class std::basic_string<char, std::char_traits<char>, std::allocator<char> >’ blah.cpp: In function ‘int main()’: blah.cpp:22: error: ‘class std::string’ has no member named ‘blah’ Line 52 of /usr/include/c++/4.4/bits/stringfwd.h: template<typename _CharT, typename _Traits = char_traits<_CharT>, typename _Alloc = allocator<_CharT> > class basic_string; As far as I know this is just a forward delcaration of the template, NOT an instantiation as g++ claims. Line 56 of /usr/include/c++/4.4/bits/stringfwd.h: typedef basic_string<char> string; As far as I know this is just a typedef, NOT an instantiation either. So why are these lines conflicting with my code? What can I do to fix this other than ensuring that my code is always included before <string>?

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day, with the Launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Released). We have had TFS 2010 in Production for nearly 6 months and have had only minimal problems. Update 12th April 2010  – Added Scott Hanselman’s tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition. Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman Same as before, we need to Uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.   We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. Done - Snapshot the hyper-v server Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server. Done - Uninstall Visual Studio Team Explorer 2010 RC You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server. Done - Uninstall TFS 2010 RC Done - Install TFS 2010 RTM Done - Configure TFS 2010 RTM Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings Done - Upgrade the SharePoint Extensions Upgrade Build Servers (Pending) Test the server The back out plan, and you should always have one, is to restore the snapshot. Upgrading to Team Foundation Server 2010 – Done The first thing you need to do is off the TFS server and then log into the Hyper-v server and create a snapshot. Figure: Make sure you turn the server off and delete all old snapshots before you take a new one I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS2010 is to uninstal the old version and install the new one. Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.   Figure: Most of the heavy lifting is done by the Uninstaller, but make sure you have removed any of the client bits first. Specifically Visual Studio 2010 or Team Explorer 2010.  Once the uninstall is complete, this took around 5 minutes for me, you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations. Figure: It is always recommended to install the 64bit version of a server application where possible. 
I do not think it is likely, with SharePoint 2010 and Exchange 2010  and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on out TFS server along with Analysis services and Reporting Services. But our SharePoint server lives elsewhere. Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need. Every extra you install has an impact of performance. If you are integrating with SharePoint you will need to run this install on every Front end server in your farm and don’t forget to upgrade your Build servers and proxy servers later. Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)   It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers. Figure: Installing Microsoft .NET Framework 4 takes the most time. Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-on’s, Check in Policies and other bits as well. Configure Team Foundation Server 2010 – Done Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with a MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key. Figure: You can pick trial and add your key later using the TFS Server Admin. Here is where the real choices happen. We are doing an Upgrade from a previous version, so I will pick Upgrade the same as all you folks that are using the RC or TFS 2008. Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgraded them to the release.   Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade. Figure: Select your database from the list and at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases. At this point the configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options. Figure: Set the application tier account and Authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.  Figure: Setting your TFS server URL’s to be the remote URL’s allows the reports to be accessed without using VPN. Very handy for those remote developers. Figure: Detected the existing Warehouse no problem. Figure: Again we love green ticks. It gives us a warm fuzzy feeling. Figure: The username for connecting to Reporting services should be a domain account (if you are on a domain that is). Figure: Setup the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.   You then need to run all of your readiness checks. These check can save your life! 
it will check all of the settings that you have entered as well as checking all the external services are configures and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous version were not. Microsoft changes the install to two steps, Install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install that it will complete. if you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.   Figure: Now we have everything setup the configuration wizard can do its work.  Figure: Took a while on the “Web site” stage for some point, but zipped though after that.  Figure: last wee bit. TFS Needs to do a little tinkering with the data to complete the upgrade. Figure: All upgraded. I am not worried about the yellow triangle as SharePoint was being a little silly Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException) Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) [Info   @16:10:16.307] Benign exception caught as part of verify: Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException) Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify() Inner Exception Details: Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException) Exception Data Dictionary: ResponseStatusCode = InternalServerError I’ll look at SharePoint after, probably the SharePoint box just needs a restart or a kick If there is a problem with SharePoint it will come out in testing, But I will definatly be passing this on to Microsoft.   Upgrading the SharePoint connector to TFS 2010 You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this uninstall  the TFS 2010 RC from it in the same way as the server, and then install just the RTM Extensions. 
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.   Figure: When you configure SharePoint it uploads all of the solutions and templates. Figure: Everything is uploaded Successfully. Figure: TFS even remembered the settings from the previous installation, fantastic.   Upgrading the Team Foundation Build Servers to TFS 2010 Just like on the SharePoint servers you will need to upgrade the Build Server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server you will probably have some version of Visual Studio installed. You will need to remove this as well. (Coming Soon) Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS2010 If you have developers still on Visual Studio 2005 or 2008 you will need do download the respective compatibility pack: Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that you have the latest version of your applications with SSW Diagnostic which will check for Service Packs and hot fixes to Visual Studio as well.   Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade

    Read the article

  • How do I install a driver for an Atheros AR9285?

    - by Fernando
    How to install the driver for Atheros AR9285 in Ubuntu 11.10. Still no package for 11.10 according to this https://help.ubuntu.com/community/WifiDocs/Device/Atheros/AR9285 Here is the output of the commands marc@fer-VPCYA1V9E:~$ sudo lshw -class network *-network DISABLED description: Wireless interface product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:02:00.0 logical name: wlan0 version: 01 serial: 4c:0f:6e:d6:65:cc width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.0.0-12-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:16 memory:d3400000-d340ffff *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Atheros Communications physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: c0 serial: 54:42:49:a2:1f:bc capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair resources: irq:43 memory:d2400000-d243ffff ioport:1000(size=128) And the second command marc@fer-VPCYA1V9E:~$ rfkill list 0: sony-wifi: Wireless LAN Soft blocked: no Hard blocked: no 1: sony-bluetooth: Bluetooth Soft blocked: no Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no 3: hci0: Bluetooth Soft blocked: no Hard blocked: no 4: acer-wireless: Wireless LAN Soft blocked: yes Hard blocked: no Is there a way to make it work?

    Read the article

  • Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hos

    - by Michael J. Hamilton, Sr.
    Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hosting, Michigan Sclera, a Microsoft Hosted Services Provider Partner, is offering key Service Offerings around the Microsoft SharePoint Server 2010 stack. Specifically – if you're looking for SharePoint Foundation, SharePoint Standard or Enterprise 2010 hosting provisions, check out the Service Offerings from Sclera Hosting (www.sclerahosting.com) and compare with some of the lowest prices available on the web today. I wanted to post this so you could shop around and compare. There are a couple of larger on-demand hosting agencies (247hosting and fpweb hosting) that charge outrageous fees - like $350 a month for SharePoint Foundation 2010 hosting. The most incredible part? This is on a shared domain name – not the client's domain. It's hosting on something like http://<yourSiteName>.sharepointsites.com – or something crazy like that. Sclera Hosting provides you on demand – SharePoint Foundation, SharePoint Server Standard/Enterprise – 2010 RTM bits – within minutes of your order – ON YOUR DOMAIN – and that is a major perk for me. You have complete SharePoint Designer 2010 integration; complete support for custom assemblies, web parts, you name it – this hosting provider gives you more bang for the buck than any provider on the Net today. Now – some teasers – I was in a meeting this week and I heard – SharePoint Foundation – 2010 RTM bits – unlimited users, 10 GB content database quota, full SharePoint Designer 2010 integration/support, all on the client's domain – sit down and soak this up - $175.00 per month – no kidding. Now, I do not know about you – but – I have not seen a deal like that EVER on the Net – so – get over to www.sclerahosting.com – or email the Sales Team at Sclera Design, Inc. today for more details. Have a great weekend!

    Read the article

  • Wireless doesn't work on a Lenovo V570

    - by Stephen
    I've had Ubuntu installed on my HD for about 3 months but ever since I ran into this wireless issue I kinda lost my lust of Ubuntu. I have zero experience getting around with/ using the console command. I have a Lenovo V570. I got the driver update for the broadcom networking card via the Additional Drivers application but that did nothing. I love the look and feel of using Ubuntu but I have no technological experience for the matter. Any help would be awesome. When I scan for wireless connections while in Ubuntu, my computer picks up nothing, while on Win7 it will pick up the handful of wireless networks around my area. My wired connection is fine, but the use of not having wireless on a laptop is rather contradictory to it as a feature. Cheers! Also, I just installed 11.10, if that helps any. Yes, I used the search before I posted this, but again I have ZERO understanding of the command stuff and need a meat and potatoes answer(s). stephen@ubuntu:~$ sudo lshw -class network [sudo] password for stephen: *-network UNCLAIMED description: Network controller product: BCM4313 802.11b/g/n Wireless LAN Controller vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:03:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:f1900000-f1903fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:04:00.0 logical name: eth0 version: 06 serial: f0:de:f1:63:98:14 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.1.78 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:2000(size=256) memory:f1804000-f1804fff memory:f1800000-f1803fff stephen@ubuntu:~$ rfkill list all 0: ideapad_wlan: Wireless LAN Soft blocked: yes Hard blocked: no 1: acer-wireless: Wireless LAN Soft blocked: yes Hard blocked: no

    Read the article

  • Confusion of the "stack" in Assembly-level programming

    - by Bigyellow Bastion
    What is the "stack" exactly? I've read articles, tried comprehending it through my understanding, experience, and educated guessing of programming and computers, but I'm a bit perplexed here. The "stack" is a region in RAM? Or is it some other space I'm uncertain of here? The processor pushes bits through registers on to the stack in RAM, or do I have it wrong here? Also, the processor moves the bits from the RAM to the register to "process" it, such as maybe a compare, arithmetic, etc. But what actually can help understand, in some visual or verbal description or both, of how to implement the idea of a "stack" here? Is the stack actually the same in terminology with a "machine stack" meaning it's in RAM? I'm sorry, I don't want to solicit debate or arguments, but I really could use some help here if anyone can straighten things out. TO ADD: I know what a software stack is. I know about LIFO, FIFO, etc. I just want to gain a better understanding of the Assembly-level stack, what it is, where it is, how exactly it works, etc. Thanks for reading!

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
    Out of the box ASP.NET WebAPI does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow Browser based JavaScript client applications to bypass cross-site scripting limitations and serve data from the non-current Web server. AJAX in Web Applications uses the XmlHttp object which by default doesn't allow access to remote domains. There are number of ways around this limitation <script> tag loading and JSONP is one of the easiest and semi-official ways that you can do this. JSONP works by combining JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQUery it's extremely easy to access JSONP content. Imagine that you have a URL like this: http://RemoteDomain/aspnetWebApi/albums which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if  the server can serve up JSONP this data can be accessed cross domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP:function getAlbums() { $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?",null, function (albums) { alert(albums.length); }); } The resulting callback the same as if the call was to a local server when the data is returned. jQuery deserializes the data and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP. What does JSONP look like? JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this:Query17103401925975181569_1333408916499( [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}] ) The way JSONP works is that the client (jQuery in this case) sends of the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens. JSONP works by executing the code that gets returned from the JSONP call. JSONP and ASP.NET Web API As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug-in a custom formatter that provides this functionality. The following code is based on Christian Weyers example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds. 
Here's the code:  using System; using System.IO; using System.Net; using System.Net.Http.Formatting; using System.Net.Http.Headers; using System.Threading.Tasks; using System.Web; using System.Net.Http; namespace Westwind.Web.WebApi { /// <summary> /// Handles JsonP requests when requests are fired with /// text/javascript or application/json and contain /// a callback= (configurable) query string parameter /// /// Based on Christian Weyers implementation /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs /// </summary> public class JsonpFormatter : JsonMediaTypeFormatter { public JsonpFormatter() { SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json")); SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript")); //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json")); JsonpParameterName = "callback"; } /// <summary> /// Name of the query string parameter to look for /// the jsonp function name /// </summary> public string JsonpParameterName {get; set; } /// <summary> /// Captured name of the Jsonp function that the JSON call /// is wrapped in. Set in GetPerRequestFormatter Instance /// </summary> private string JsonpCallbackFunction; public override bool CanWriteType(Type type) { return true; } /// <summary> /// Override this method to capture the Request object /// and look for the query string parameter and /// create a new instance of this formatter. /// /// This is the only place in a formatter where the /// Request object is available. /// </summary> /// <param name="type"></param> /// <param name="request"></param> /// <param name="mediaType"></param> /// <returns></returns> public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type, HttpRequestMessage request, MediaTypeHeaderValue mediaType) { var formatter = new JsonpFormatter() { JsonpCallbackFunction = GetJsonCallbackFunction(request) }; return formatter; } /// <summary> /// Override to wrap existing JSON result with the /// JSONP function call /// </summary> /// <param name="type"></param> /// <param name="value"></param> /// <param name="stream"></param> /// <param name="contentHeaders"></param> /// <param name="transportContext"></param> /// <returns></returns> public override Task WriteToStreamAsync(Type type, object value, Stream stream, HttpContentHeaders contentHeaders, TransportContext transportContext) { if (!string.IsNullOrEmpty(JsonpCallbackFunction)) { return Task.Factory.StartNew(() => { var writer = new StreamWriter(stream); writer.Write( JsonpCallbackFunction + "("); writer.Flush(); base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait(); writer.Write(")"); writer.Flush(); }); } else { return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext); } } /// <summary> /// Retrieves the Jsonp Callback function /// from the query string /// </summary> /// <returns></returns> private string GetJsonCallbackFunction(HttpRequestMessage request) { if (request.Method != HttpMethod.Get) return null; var query = HttpUtility.ParseQueryString(request.RequestUri.Query); var queryVal = query[this.JsonpParameterName]; if (string.IsNullOrEmpty(queryVal)) return null; return queryVal; } } } Note again that this code will not work with the Beta bits of Web API - it works only with post beta bits from CodePlex and hopefully this will continue to work until RTM :-) This code is a bit different from Christians original code as the API has changed. 
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use the request to pick up the values you need and store them on the formatter. Note that I also have to create a new instance of the formatter since I'm storing request specific state on the instance (information whether the callback= querystring is present) so I return a new instance of this formatter. Other than that the code should be straight forward: The code basically writes out the function pre- and post-amble and the defers to the base stream to retrieve the JSON to wrap the function call into. The code uses the Async APIs to write this data out (this will take some getting used to seeing all over the place for me). Hooking up the JsonpFormatter Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.  protected void Application_Start(object sender, EventArgs e) { // Verb Routing RouteTable.Routes.MapHttpRoute( name: "AlbumsVerbs", routeTemplate: "albums/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" } ); GlobalConfiguration .Configuration .Formatters .Insert(0, new Westwind.Web.WebApi.JsonpFormatter()); }   That's all it takes. Note that I added the formatter at the top of the list of formatters, rather than adding it to the end which is required. The JSONP formatter needs to fire before any other JSON formatter since it relies on the JSON formatter to encode the actual JSON data. If you reverse the order the JSONP output never shows up. So, in general when adding new formatters also try to be aware of the order of the formatters as they are added. Resources JsonpFormatter Code on GitHub© Rick Strahl, West Wind Technologies, 2005-2012Posted in Web Api   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();
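
    For completeness, here is a hedged sketch of what the AlbumApi controller behind the /albums route above might look like; the Album type and its second data row are invented for illustration (the first row echoes the sample JSONP payload shown earlier) and are not taken from the article's own code.

      using System.Collections.Generic;
      using System.Web.Http;

      // Hypothetical DTO -- the field names are placeholders, not the article's model.
      public class Album
      {
          public string Id { get; set; }
          public string AlbumName { get; set; }
      }

      // Matches the "AlbumApi" controller name used in the route registration above.
      public class AlbumApiController : ApiController
      {
          // GET /albums -- returns plain JSON normally, or JSONP when a ?callback=
          // query string parameter is present and the JsonpFormatter is registered.
          public IEnumerable<Album> Get()
          {
              return new[]
              {
                  new Album { Id = "34043957", AlbumName = "Dirty Deeds Done Dirt Cheap" },
                  new Album { Id = "34043958", AlbumName = "Back in Black" }
              };
          }
      }

    With that in place, the jQuery $.getJSON call from the top of the article exercises the whole path end to end.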

    Read the article
