Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • Are the only types of data "sources" static and dynamic?

    - by blunders
    Thinking that there might be others, but not sure -- but before getting into that, let me explain what I mean by static and dynamic data sources. Static (or datastore): the data's state is non-changing, and if it were changed, that would be a new state; the old data would be considered stateless, meaning it is no longer known to exist or not exist. Another way of looking at a static data source is that if it is read and written back without modification, the checksums before and after should be exactly the same, regardless of how much time passes between the reading and the rewriting of the data. Examples: photos, files, database records. Dynamic (or datastream): the data's state is known to be in flux and never expected to be the same per input. Examples: live video/audio feeds, stock market feeds. First, let me say the above is a very loose mapping of the concepts, and I'd welcome any feedback. Next, onto the core of the question: are these the only two types of data sources? My guess is that yes, they are -- but that there are hybrid versions of the two, i.e. streaming data that has a fixed state. For example, the data being streamed carries a checksum, and each unique checksum is known to identify a single instance of static data. On the flip side, static data could be chained, say via a version control system; when played back, each version might be viewed as a segment of a stream. The thing is, the very fact that it can be played back makes the data source static. Another type might be a data source that is being organically discovered, where it is simply unknown what the state is. Questions, feedback, requests -- just comment, thanks!!
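
    A minimal sketch of the round-trip checksum test described above, assuming a hypothetical file data.bin as the static source (standard coreutils):

        md5sum data.bin > before.md5                  # record the state
        cp data.bin tmp.bin && mv tmp.bin data.bin    # read and write back unmodified
        md5sum -c before.md5                          # prints "data.bin: OK" for a static source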


  • How do I keep controversy in check?

    - by Aaron Digulla
    This is probably OT, but it's less OT here than on any other SO site, so please bear with me. I'm working on a new project, votEm. The goal is to give independent candidates a platform to introduce themselves and get elected to a political office. My main reason is that today it's too expensive to run for office. Some politicians in the US spend as much as 30 million dollars (!) on a single campaign. That money is better spent elsewhere. In a similar fashion, people who want to bring change to countries like Egypt could use such a platform to present themselves. Now, I expect a lot of emotion and pressure on my site. People with a lot of money (and a lot to lose) will try to game it (political parties, secret services of ... errr ... "not 100% democratic countries", big companies, ...). To avoid as many mistakes as possible, I need a list of resources, ideas and tips on how to keep such a site out of too much trouble. PS: I'd make this CW but the option seems to be gone...


  • Configuring BitLocker on existing Windows Server 2008

    - by neildeadman
    On our file server, we want to enable BitLocker so that we can encrypt a single drive, and I know that we also have to encrypt the drive the system is installed on. In my tests with VMs, I have to create a partition on the same disk Windows is installed on, either before I install Windows or afterwards (using the BitLocker Drive Preparation Tool). This tool shrinks the C: drive to make way for a 1.5GB partition. I'm a little concerned about doing this on a live system, so I wanted to see if it is possible to get BitLocker to use a partition on another drive.
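
    For reference, a hedged sketch of the preparation step discussed above, run from an elevated command prompt. bdehdcfg is the command-line form of the Drive Preparation Tool (built into newer Windows versions; a separate download on the original Server 2008), and the size is given in MB, 1500 matching the 1.5GB figure. Rehearse this on a test VM before touching the live server:

        rem Shrink the Windows drive and create the BitLocker system partition
        bdehdcfg -target default -size 1500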


  • Slap the App on the VM for every private cloud solution! Really ?

    - by Anand Akela
    One of the key attractions of the general session "Managing Enterprise Private Cloud" at Oracle OpenWorld 2012 was an interactive role play depicting how to address some of the key challenges of planning, deploying and managing an enterprise private cloud. It was a face-off between Don DeVM, IT manager at a fictitious enterprise 'Vulcan', and Ed Muntz, the Enterprise Manager hero. Don DeVM is really excited about the efficiency and cost savings from virtualization. The success he enjoyed with infrastructure virtualization made him believe that for all cloud service delivery models (database, testing or applications as-a-service) he has a single solution: slap the app on the VM and there you go. Ed Muntz, however, believes in delivering cloud services that allow the business units and enterprise users to manage the complete lifecycle of the cloud services they are providing, for example setting up the cloud, provisioning it to users through a self-service portal, managing and tuning the performance, and monitoring and applying patches for databases or applications. Watch the video of the face-off, see how Don and Ed address some of the key challenges of planning, deploying and managing an enterprise private cloud, and be the judge!


  • How can I get ssh-agent working over ssh and in tmux (on OS X)?

    - by Rich
    I have a private key set up for my github account, the passphrase to which is, I believe, stored in OS X's keychain. I certainly don't have to type it in when I open a terminal window and enter ssh git@github.com. However, when I'm running bash over an ssh session, or locally inside a tmux session, I have to type in the passphrase every single time I attempt to ssh to github. This question suggests that a similar problem exists with screen, but I don't really understand the issue well enough to fix it in tmux. There's also this page, which includes a fairly complicated solution, but for zsh. EDIT: In response to @Mikel's answer, from a local terminal I get the following output:

        [~] $ echo $SSH_AUTH_SOCK
        /tmp/launch-S4HBD6/Listeners
        [~] $ ssh-add -l
        2048 [my key fingerprint] /Users/richie/.ssh/id_rsa (RSA)
        [~] $ typeset -p SSH_AUTH_SOCK
        declare -x SSH_AUTH_SOCK="/tmp/launch-S4HBD6/Listeners"

    Whereas over ssh or in tmux I get:

        [~] $ echo $SSH_AUTH_SOCK
        [~] $ ssh-add -l
        Could not open a connection to your authentication agent.
        [~] $ typeset -p SSH_AUTH_SOCK
        bash: typeset: SSH_AUTH_SOCK: not found

    echo $SSH_AGENT_PID returns nothing, whichever shell I run it from.
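
    A hedged sketch of one common workaround (a general technique, not something from the question): keep a stable symlink pointing at whichever agent socket the most recent local login created, and have every shell use the symlink. Assumes bash and that ~/.ssh exists; add to ~/.bash_profile:

        # Outside tmux/ssh, re-point the stable path at the live agent socket
        if [ -n "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$HOME/.ssh/agent_sock" ]; then
            ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/agent_sock"
        fi
        export SSH_AUTH_SOCK="$HOME/.ssh/agent_sock"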


  • I want Lotus Domino to send only one email to users that are both recipients and members of a cc'ed Lotus group.

    - by Marcus
    Lotus Domino 7, and now Lotus Domino 8.5. The scenario: a@mycompany writes an email to b@internet and cc's it to group@mycompany. a@mycompany is a member of group@mycompany. With the initial email, Domino is intelligent enough not to send the email a@mycompany just wrote back to a@mycompany. But when b@internet replies to all (a@mycompany + group@mycompany), a@mycompany gets the email twice, because he is not only the author but also a member of group@mycompany. During the SMTP session the email is sent once, with the recipients set to a@mycompany and group@mycompany and a single ESMTP id, so Domino should well be able to see that the mail should only be delivered to a@mycompany once. Can I make Lotus Domino behave in this sane fashion?


  • Timeouts when connecting to SQL Server since installing SP1 for Windows 7

    - by Julien
    Hi, I just installed SP1 for Windows 7, and since then I have severe performance degradation when connecting to SQL Server 2005. Establishing a connection takes more than 30 seconds, while it's instantaneous on another computer. The firewall is disabled and I didn't make any changes to the configuration. It happens both when trying to connect with a hostname and with an IP address. Everything else seems to be fine (for instance, I have no issue connecting to other computers with Remote Desktop). What can cause such a problem? Thanks in advance! Edit: uninstalling SP1 solves the issue instantly.


  • How to rename everything matching a certain string in a folder

    - by lostiniceland
    Hello everyone. I am running Linux and I have some basic console knowledge, but my current problem is quite difficult and I don't know how to achieve this. I want/need to rename everything within a folder that matches a given string. By everything I mean:

    - folder names
    - file names
    - content within files
    - content in hidden files

    Basically I want to refactor a Java project. Sure, I could use Eclipse to handle the replacing, but that leaves out the folders and resources outside of my workspace. I was thinking of a script that could do the job for me, but this seems rather tricky. For instance, when it comes to folder/file renames, I want to replace only the part of the name that matches my string; the rest should remain untouched. Maybe someone already has something like this in his/her script collection :-) Thanks in advance, Marc
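
    A hedged sketch of one way to do this, assuming the placeholder strings OldName and NewName and GNU tools; run it on a copy of the project first:

        # 1. Replace the string inside all files, hidden ones included
        grep -rl 'OldName' . | xargs sed -i 's/OldName/NewName/g'
        # 2. Rename matching files and folders, deepest paths first,
        #    touching only the matching part of each name
        find . -depth -name '*OldName*' | while read -r p; do
            mv "$p" "$(dirname "$p")/$(basename "$p" | sed 's/OldName/NewName/g')"
        done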


  • script Disk Management configuration

    - by Joseph
    I have 10 workstations with large monitors that have USB slots and several card readers built in. The card readers cannot be disabled and will map to drive letters when I image the computers. I go into Disk Management and delete the drive mappings and add mappings to a single folder in C:\ with a folder for each slot. I have to do this because of scripts that run that are expecting specific letter drive mappings to network resources. Is there a way to script the deleting and adding of drive mappings instead of having to use the Disk Management GUI manually on each workstation? The workstations are running XP Professional.
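
    A hedged sketch of one way to script this with diskpart, which ships with XP. The volume number and mount path below are assumptions: run "list volume" on each workstation first and adjust, then save the lines as slots.txt and run "diskpart /s slots.txt".

        rem Remove the card reader's letter and mount it under C:\ instead
        rem (the mount folder must already exist on an NTFS volume)
        select volume 4
        remove letter=E
        assign mount=C:\CardSlots\Slot1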


  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder with a lot of files inside. Running ls -1 | wc -l returns that I have about 160000 files. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in them, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file was there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an Argument list too long error. Some searching revealed that this happens because I have so many files, and then I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying find -name '*:*' | xargs rename : _ gives:

        xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
        syntax error at (eval 1) line 1, near ":"
        xargs: rename: exited with status 255; aborting

    Adding -0 after xargs turns the error message into:

        xargs: argument line too long

    These files are archive files generated by various PHP scripts. The best solution would be having a chance to rename them before they are moved to Windows, but if there is no way to do that, we might have a way to rename the files while they are moved to Windows. I use samba and proftpd to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is, a server, with only a command-line interface.
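
    A hedged sketch of one way around both errors, assuming the Perl rename utility from the question: let find hand the filenames to xargs NUL-terminated so quoting stops being an issue, and quote the substitution so it reaches rename intact. The character class also catches ?, the other character Windows rejects; try it on a copy of a few files first:

        find . -maxdepth 1 -name '*[:?]*' -print0 | xargs -0 rename 's/[:?]/_/g'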


  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game where the client must be written in AS3 (Flash), to embed the game into a browser, and the server in C++ (an abstract part of which is already written and used with other games). Networking models may differ from each other, but currently I'm looking toward running the game's logic on both the client and the server, even though they're written in different languages; that's not the main problem. My previous game (a pretty big one, implemented by the efforts of ~5 programmers over 1.5 years) was mainly "written" within spreadsheets as structured objects with inheritance: we wrote a standalone tool which generated AS3 and C++ (the languages of the platforms the game was published on) from a specified spreadsheet file (.xls or .ods). That file contained ~50 sheets with ~50 rows and ~50 columns each, and was mainly written by game designers who do not know any programming languages. But that game was single-player. Given the problem declared for my currently implemented MMO, I'm looking toward some broad pipeline that resolves problems like:

    - game object descriptions (which starships exist within the game, how many HP they have, how fast they move, what damage they deal...)
    - action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells); actions are transmitted through the server between clients
    - influences (what happens when a specified action is applied to a specified object, e.g. "Ship A attacked Ship B: field 'HP' of Ship B is reduced by the amount of field 'damage' of Ship A"); influences can be much more complicated, e.g. "damage is twice its size when the Ship has >=5 allies around him in a 200-unit range during night", and so on

    If such logic can be written within some "design document", it will easily be possible to:

    - let designers do their job without programmer intervention or any bug-prone programming
    - validate the described logic
    - transfer (transform, convert) it to any programming language where it will be executed

    Has somebody worked on something like that? Are there tools/engines/pipelines concerned with this? How do I handle all of these problems simultaneously in the best way, and am I framing my tasks and problems properly?


  • How to collaborate on features using github

    - by Robert Dailey
    github encourages 1 fork per user, so that each user can work independently on a feature and then request that the feature be accepted into the main repository via a pull request. However, what if 2 developers need to collaborate on that feature? What is the ideal workflow for this? I could see a number of options:

    1. Both developers fork the original repository. Each developer pulls/pushes changes between each other's repository. This seems like a lot of work (tiny micro operations) and also creates a delay between changes, so it increases the window for conflicts.
    2. Developer 1 forks from the main repository, developer 2 forks from developer 1. Much the same as #1, but hopefully it simplifies Developer 2's life a little?
    3. Developer 1 gives Developer 2 permissions on his own fork, so they both work out of the same central repository. Not sure if this is ideal.

    I'm also curious where branches come into this. Obviously there would be a branch for the feature itself, but that branch can't exist in a single place; it would have to exist on multiple forks and be synchronized. Basically I'm just really confused about this workflow and would like an approach for how this can best be accomplished.
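
    A hedged sketch of what option 3 looks like in practice (user, repository and branch names are placeholders): developer 1 adds developer 2 as a collaborator on the fork, and both push the same feature branch to it.

        # Developer 1: create and publish the shared feature branch
        git clone git@github.com:dev1/project.git && cd project
        git checkout -b feature-x
        git push -u origin feature-x

        # Developer 2 (once added as a collaborator on dev1's fork):
        git clone git@github.com:dev1/project.git && cd project
        git checkout --track origin/feature-x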


  • how can I pass an environment variable through an ssh command?

    - by Ross Rogers
    How can I pass a value into an ssh command, such that the environment started on the host machine has a certain environment variable set to a value of my choosing? EDIT: The goal is to pass the current KDE desktop (from dcop kwin KWinInterface currentDesktop) to the new shell, so that I can pass back an NFS location, unique for each KDE desktop, to my JEdit instance on the original server (using a mechanism like emacsserver/emacsclient). The reason multiple ssh instances can be in flight at one time is that when I'm setting up my environment, I open a bunch of different ssh instances to different machines.
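
    A hedged sketch using OpenSSH's own environment forwarding; KDE_DESKTOP is a made-up variable name, user@host is a placeholder, and the remote sshd must allow the variable with a matching "AcceptEnv KDE_DESKTOP" line in /etc/ssh/sshd_config:

        # Export the current KDE desktop and ask ssh to forward it
        export KDE_DESKTOP=$(dcop kwin KWinInterface currentDesktop)
        ssh -o SendEnv=KDE_DESKTOP user@host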


  • How to calculate bandwidth limits per user on WiFi network

    - by Lars
    A typical 802.11g access point can provide around 25 Mbps of bandwidth. How is the bandwidth shared among the users? Furthermore, how many users can be served by a single access point using 802.11g in an environment with low interference, and average web activity from the users? The goal is to use bandwidth limitation to avoid starvation for some users in case some of the users start to download a file or stream HD video or some other bandwidth intensive activity. Can someone break down the math on this?
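
    A hedged back-of-envelope sketch, purely illustrative: real 802.11g airtime is shared unevenly and overhead varies, so the factors below are assumptions.

        usable throughput  ~ 25 Mbit/s x 0.8 (protocol overhead)  = 20 Mbit/s
        fair share         = 20 Mbit/s / 25 active users          = 0.8 Mbit/s
        so a per-user cap of ~1 Mbit/s leaves some headroom for bursts
        while keeping one HD stream or download from starving the rest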


  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package, almost an entire domain model. I had thought of breaking these packages down, creating sub-domains centered around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable; when procedures cross between two domain models, it would be less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity:

    1. Re-usability
    2. Asynchronous development
    3. Maintainability

    Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development, and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons to break down classes, rather than just the methods inside of classes? Right now I do believe there is an issue with these mega packages forming, but the only benefit I can really pin down for breaking them up is readability, something that experience gained from working with them would solve.


  • Duplicate xserver

    - by hariks0
    When I start my Ubuntu PC, just before login, I receive a message that says "An X server is already started in display :0. Would you like to try starting the X server in another virtual workspace?". If I answer 'No', the message appears again and again. If I say 'Yes', it goes to the GNOME desktop in virtual terminal 9 [Ctrl+Alt+F9]. It was previously available only on [Ctrl+Alt+F7], but is now available on both F7 and F9. So I assume that there are two instances of the X server, one on [Ctrl+Alt+F7] and one on [Ctrl+Alt+F9]. I think that by killing one instance of the X server manually I can get rid of the message. How do I do this? Thanks in advance.
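
    A hedged first step for the manual kill mentioned above; the PID is a placeholder, so read it off the ps output rather than copying this verbatim:

        # List running X servers with the display (:0, :1, ...) each one owns
        ps aux | grep '[X]org'
        # Stop the unwanted instance by its PID
        sudo kill 1234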


  • How to invert scroll wheel in certain applications using AutoHotkey?

    - by endolith
    I want to be able to modify the scrolling/middle-click behavior for individual apps on Windows 7, so that the scroll-to-zoom direction is always consistent across apps. This script makes the middle button act as a hand tool in Adobe Acrobat, for instance:

        ; Hand tool with middle button in Adobe Reader
        #IfWinActive ahk_class AdobeAcrobat
        Mbutton::
        #IfWinActive ahk_class AcrobatSDIWindow
        Mbutton::
        Send {Space down}{LButton down} ; Hold down the left mouse button.
        KeyWait Mbutton                 ; Wait for the user to release the middle button.
        Send {LButton up}{Space up}     ; Release the left mouse button.
        return
        #IfWinActive

    (It would be great if this could be adapted to allow "throwing" the document, too, like in Android or iPhone interfaces, but I don't know if it's possible to control scrolling that precisely.) How do I invert the scroll wheel zoom direction?
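
    A hedged sketch in the same AutoHotkey vein, assuming the application zooms with Ctrl+wheel (as Acrobat does): swap the wheel events only while Ctrl is held, leaving normal scrolling alone. The window class is copied from the script above.

        ; Invert Ctrl+wheel zoom direction in Acrobat only
        #IfWinActive ahk_class AcrobatSDIWindow
        ^WheelUp::Send ^{WheelDown}
        ^WheelDown::Send ^{WheelUp}
        #IfWinActive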


  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of master data, by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise, like social profiles, will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like retail, communications and financial services, which would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success from Big Data, in other words ROI from Big Data investments, businesses need to acquire, organize and analyze the deluge of data to make better decisions. There will need to be a coexistence of structured and unstructured data, with a tight link maintained between the two to extract maximum insight. MDM is the catalyst that maintains that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as on integration or architectural patterns.


  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher: ... #define SIZE 1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR); ... Running this gave the following results: Time per iteration 0.000065606310 MB/s Time per iteration 2.709711563906 MB/s Time per iteration 0.178590114758 MB/s Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second; which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine at the profiles for the two cases. When the write() was trapping into the OS the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.


  • What is the maximum number of TCP connections I can have in Windows Server 2008?

    - by evilfred
    I would like to have as many connections (single connections from many different clients) as humanly possible on a server running Windows Server 2008, in order to support a Comet-style application. The application is written in C#. The connections will not be chatty; they just need to be open (and stay open). Buying boatloads of memory and fast CPUs is not a problem. As far as I can tell, I will be limited to 65k simultaneous open connections per NIC, the maximum number of ports. Is this accurate? Or can I somehow go beyond 65k connections per NIC? It seems like there are server products, for Linux at least, that support hundreds of thousands of connections. How do they do this?
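
    A hedged aside on the premise: the ~65k figure is the size of the TCP port space, which constrains outbound connections per source address; inbound connections to a single listening port are distinguished by the remote address/port pair, so a listener can hold far more than 65k of them, memory permitting. On Server 2008, the local dynamic port range can be inspected and widened like this:

        netsh int ipv4 show dynamicport tcp
        netsh int ipv4 set dynamicport tcp start=10000 num=55535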


  • CodePlex Daily Summary for Sunday, July 21, 2013

    Popular Releases

    - Magick.NET: Magick.NET 6.8.6.601: Magick.NET linked with ImageMagick 6.8.6.6.
    - MISAO: Ver. 5.33: Latest app and add-ins
    - C# Intellisense for Notepad++: Initial release: Members auto-complete; integration with native Notepad++ auto-completion; auto "open bracket" for methods; right-arrow to accept suggestions
    - 51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet. This release introduces the 51Degrees.mobi IIS Vary Header Fix. When compression and caching are used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to version 2.1.19.4: handlers now have a 'Count' property, an integer value that shows how many devices in the dataset use that handler; Provider.cs -> GetDeviceInfoByID to address a problem w...
    - SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719, 2013/7/19. Pattern changes: DapperContext pattern is added; all patterns are updated to work with one-to-one relations. Changes: one-to-one relations are supported; minor bug fixes.
    - Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1. Additional support for Windows 8.1 Preview. New APIs (JS): addTextTrack, msKeys, msPlayToPreferredSourceUri, msSetMediaKeys, onmsneedkey. New APIs (Xaml): SetMediaStreamSource method, Stretch property, StretchChanged event, AreTransportControlsEnabled property, IsFullWindow property, PlayToPreferredSourceUri proper...
    - Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes support for multiple calendars, which can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", not shared ones yet. Also, they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. Added fri...
    - Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: show grid in editor; cut/copy/paste support; bug fixes
    - DaRenamer: Renamer 2.1.0.5: Version 2.1.0.5 - fixed minor bug
    - Install Verify Tool: Install Verify Tool V 1.0: Win Service, Web Service, Win Service Client, Web Service Client
    - Orchard Project: Orchard 1.7 RC: Planning released
    - Terminals: Version 3.1 - Release: Changes since version 3.0: 15992 unified usage of icons in user interface; added context menu in the Organize favorites grid. Fixed: 34219, 34210, 34223, 33981, 34209. Install notes: no changes in the database (use the database from release 3.0); no upgrade of configuration, passwords, credentials or favorites; see also the upgrade notes for release 3.0.
    - PMU Connection Tester: PMU Connection Tester v4.4.0: This is the current release build of the PMU Connection Tester, version 4.4.0. This version of the connection tester was released with openPDC 1.5 SP1 and openPDC 2.0 BETA. This application requires that .NET 4.0 already be installed on your system. Note this is the last release of the PMU Connection Tester that will be built on .NET 4.0 using the TVA Code Library and the Time-series Framework. Future releases of the PMU Connection Tester will be built on .NET 4.5 (or later) using the Grid Sol...
    - HiUpdateTools - easy publish and update your app: HiUpdateTools Add-in 1.0.0.5: Generate ClientConfig.xml and add it to the project; set the ClientConfig.xml option "CopyToOutputDirectory" = Copy if newer; fix client path not ending with a backslash; add the client assembly to the VSX package; on first use, the tool adds a reference to the client assembly; fix client application; multi-instance application; run a single instance of the update application
    - open gaze and mouse analyzer: Ogama 4.4 BETA: This beta was published on 16.07.2013 and includes fixes and improvements since the last 4.3 release, mainly in the recording section, which solves problems with Tobii and Mirametrix devices; see the source code tab for details. Please test it if you have one of these devices and give me feedback using the issue tracker or discussion tabs. Don't forget to install the .NET 4 framework and SQL Express before installing Ogama. When using Tobii tracking devices, you also have to install Apple Bonjour. On...
    - SpaceFlight: SpaceFlight_v1.1: Added VCRedist.exe; run this first if you get the "MSVCP100.dll is missing" issue
    - Advanced Resource Tab for Blend: Advanced Resource Tab 2.0: Added filtering of (sub-)resource items and collapsing/expanding of all resource dictionaries.
    - Media Companion: Media Companion MC3.573b: XBMC Link - let MC update your XBMC library. Fixes are in place; enjoy the XBMC Link function. Phil's been busy in the background and has come up with a great new feature for Media Companion, currently only implemented for movies. Once we're happy it's working with no issues, we'll extend the functionality to include TV shows. All the help for this is built into the application; go to General Preferences - XBMC Link for details. Help us make it better. Currently only tested on local and ...
    - Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crashes if 'ShowPendingUpdates' is started with wrong credentials. Fix a bug where WPP crashes if ArrivalDateAfter and ArrivalDateBefore are equal in the ComputerView. Add a filter in the ComputerView (thanks to NorbertFe for this feature request). Add an option, when right-clicking on a computer, to display the current logon user of the remote computer. Add an option in settings to choose whether WPP pings remote computers using IPv4, IPv6, or IPv6 and, if that fails, IP...
    - Lab Of Things: vBeta1: Initial release of LoT

    New Projects

    - Blindspot: This project aims to create a fully-functional windowless desktop application allowing blind/visually impaired music fans the chance to access Spotify.
    - HelloReading: ???????
    - MonteMediaCC: The Monte Media Library is a Java library for processing media data. Supported media formats include still images, video, audio and meta-data.
    - NETDeob: Deobfuscator and unpacker for .NET files.
    - NthCatalanNumber: Write a program to calculate the Nth Catalan number for a given N. http://en.wikipedia.org/wiki/Catalan_number
    - Pokemon Battle Online 0791: ???????
    - project site: the-west minimap
    - PSeG Server FIles: PSeG Server Files
    - Sample VariableSizedWrapGrid: This is an example of the use of the VariableSizedWrapGrid in the GridView control, where we can set the size of each item as needed. It can make an appearance o...
    - Services Monitoring Management Pack: Monitoring automatic services is an area where Operations Manager is deficient. This "Management Pack" serves to monitor the automat...
    - SinaIsTestingHisNewProject: this project is only for testing this site and any copying without my permission will be sued by me and will be tracked by CIA and FBI and will be HeadShot
    - Sum of a sequence: Write a program that, for two given integer numbers N and X, calculates the sum S = 1 + 1!/X + 2!/X^2 + … + N!/X^N
    - Synchrophasor Analytics: Synchrophasor Analytics is a front end for data processing and conditioning for downstream phasor-based applications, and an extension for development and analysis.
    - tcp-bridge: a TCP bridge service to redirect incoming connections to another machine by using another incoming or outgoing connection
    - TypeScript Class Library: The TypeScript Class Library
    - WPDialog: Library for developing app dialogs for Windows Phone, similar to MonoTouch.Dialog
    - WPF File Renamer: Simple file renaming application made to brush up on my WPF data binding and MVVM skills.


  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now "xVelocity in-memory analytics engine (VertiPaq)", but using xVelocity or VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database, so that you can find the most important columns to optimize. A first approach can be looking in the DataDir of Analysis Services for the folder containing the database, then looking for the biggest files in all subfolders; you will find a file whose name contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        ORDER BY used_size DESC

    You can look at the first rows to understand which are the most expensive columns in your Tabular model. The interesting data provided are:

    - TABLE_ID: the name of the object; it can also be a dictionary or an index
    - COLUMN_ID: the column the object belongs to; you can also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the memory used for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your Tabular model. Your options are:

    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the Tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (e.g. the temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and going to the higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session "VertiPaq Under the Hood" on March 21 at 08:00 GMT.
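
    A hedged convenience, built only from the DMV columns named above: the same query trimmed to the fields the post actually discusses.

        SELECT TABLE_ID, COLUMN_ID, RECORDS_COUNT, USED_SIZE
        FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        ORDER BY USED_SIZE DESC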


  • Alternative to NV Occlusion Query - getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provide a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? ... ?


  • Is it possible to have non-English regional settings with English day/month names?

    - by Indrek
    I live in Estonia, where most regional settings (number, currency and date formats) differ from those used in English-speaking countries. For instance, the decimal symbol is a comma, the thousands separator is a space, the date format is day-month-year, etc. However, if I set my regional settings to Estonian, then day and month names are also shown in Estonian everywhere. This is slightly annoying, since the language used for the rest of Windows is English, and I'd like the day and month names to be consistent with it. Is this possible while still keeping the local regional settings? One workaround I've tried is to set regional settings to, say, English (UK) and then customise them to match Estonian settings, but that messes up alphabetic sorting: accented letters like "ö" and "ä" are no longer distinguished from their non-accented versions, and "z" is sorted last rather than at its correct position in the Estonian alphabet (between "s" and "t"). The OS is Windows 7 Professional, in case that matters. Edit: alternatively, if there's no built-in way to accomplish what I want, is it possible to create a custom set of regional settings (like one can create custom keyboard layouts)?


  • WMII Terminal Width of 80 Columns for xterm (colrules)

    - by BCable
    I'm trying to get WMII to split horizontally at 80 columns for xterm, but I only see a way to do this via percentages. It would be nice to be able to set it by something other than a percentage for various resolutions, but if I have to deal with that, I will. The problem is that even percentages don't work at my resolution (1366x768): 47+47 in /colrules yields 79 characters and 48+48 yields 81 characters. As far as I can tell, there is no decimal notation allowed, so I can't write 47.5, for instance. I came from Ion3 and I'm used to using 80-column terminals, resizable by keyboard, to get a reasonable cut-off point for VIM when I'm coding. I would settle for using the mouse, but WMII seems to be much more fluid than Ion3, so I would have to do it a lot, which sounds annoying. Any ideas?
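
    For context, a hedged sketch of the kind of rule being described; the syntax below is an assumption based on standard wmii colrules entries written via wmiir, with 48+48 being the closest the post found to 80 columns at this resolution:

        echo '/.*/ -> 48+48' | wmiir write /colrules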

