Search Results

Search found 26256 results on 1051 pages for 'information science'.


  • LocalUser access for WCF hosted in IIS

    - by Eugarps
    I have tried every combination to allow unauthenticated ("LocalUser") access to a WCF service hosted in IIS, without success. Here is what I've most recently tried: wsHttpBinding with Message security and its mode set to "None"; IIS anonymous access enabled and all other authentication schemes disabled; folder-level access left at default (but with read access granted to "Users", which is all users in our domain). I understand I may not have provided enough information to solve the issue, but perhaps somebody can point me in the right direction - is this likely to be an IIS configuration issue or a WCF configuration issue? If WCF, is it likely to be a client-level or server-level issue? The error I get when attempting access is "User is not authenticated". We have ASMX services in the domain which are behaving properly; I am the first developer using WCF here.

    Read the article

  • Usability: call for action

    - by Shyam
    I am designing a page with tiny portlets. I personally like my actions on the right side, yet I wonder if there are established usability methodologies for this. After all, most applications are aimed at the user. What about you? Do you prefer information to be on top, on the left, or on the right? If you need to take some sort of action, do you prefer buttons on the left? References to good books and webpages are very welcome!

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross-posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains... upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: for those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes... this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that held a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yielded a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: [diagram omitted]

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set, performing a parameter sweep on the block size across 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    [Chart omitted.] The first chart illustrates some interesting points about the results:

        - When the block size is smaller than the source file, performance increases; but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
        - For some of the moderately-sized source files, small blocks (256KB) are best.
        - As the size of the source file gets larger (see the values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
        - Once you pass the 250MB source file size, the difference in rate between 1MB and 4MB blocks is more-or-less constant.
        - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    [Chart omitted.] The second chart is another view of the same data, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again suggests that the 1MB block size is probably the best overall, while highlighting the benefits of some of the other block sizes at different source file sizes.

    [Chart omitted.] The last chart shows the change in total duration of the file uploads for the different block sizes across the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: what we have found so far is that blocking your file uploads and uploading the blocks in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:

        - Source code for the upload test application
        - Source code for the random file generator
        - OData feed of raw data from the non-optimized transfer tests (experiment metadata, experiment datasets, and raw data for the 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads)
        - OData feeds of raw data from the blocked/parallelized transfer tests (experiment metadata, experiment datasets, and raw data for the 256KB, 512KB, 1MB, 2MB, and 4MB blocks)
        - Excel worksheet showing summarizations and comparisons
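
    For illustration only, here is a minimal sketch of the pattern described above: split the source into blocks, upload the blocks in parallel with per-block MD5s, then commit. It is written in JavaScript rather than the post's .NET harness, and putBlock/putBlockList are hypothetical stand-ins for the storage calls, not a real SDK:

        const crypto = require('crypto');

        // Hypothetical stand-ins for the Azure PutBlock / PutBlockList operations.
        async function putBlock(blockId, data, md5) { /* send one block */ }
        async function putBlockList(blockIds) { /* assemble/commit the blob */ }

        async function uploadBlocked(buffer, blockSize) {
            // Split the source into fixed-size blocks, each with a base64 block id.
            const blocks = [];
            for (let offset = 0; offset < buffer.length; offset += blockSize) {
                const id = Buffer.from(String(blocks.length).padStart(8, '0')).toString('base64');
                blocks.push({ id, data: buffer.slice(offset, offset + blockSize) });
            }
            // Upload every block in parallel; the MD5 lets the server verify that
            // the bits that arrived match what was sent.
            await Promise.all(blocks.map(function (b) {
                const md5 = crypto.createHash('md5').update(b.data).digest('base64');
                return putBlock(b.id, b.data, md5);
            }));
            await putBlockList(blocks.map(function (b) { return b.id; }));
        }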

    Read the article

  • Ampersand in GET, PHP

    - by NightMICU
    I have a simple form that generates a new photo gallery, sending the title and a description to MySQL and redirecting the user to a page where they can upload photos. Everything worked fine until the ampersand entered the equation. The information is sent from a jQuery modal dialog to a PHP page, which then submits the entry to the database. After the Ajax call completes successfully, the user is sent to the upload page with a GET URL to tell the page what album it is uploading to:

        $.ajax({
            type: "POST",
            url: "../../includes/forms/add_gallery.php",
            data: $("#addGallery form").serialize(),
            success: function() {
                $("#addGallery").dialog('close');
                window.location.href = 'display_album.php?album=' + title;
            }
        });

    If the title has an ampersand, the Title field on the upload page does not display properly. Is there a way to escape the ampersand for GET? Thanks
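
    For reference, a likely fix (illustrative, not from the original post) is to URL-encode the value before placing it in the query string; encodeURIComponent escapes & as %26, so PHP's $_GET['album'] receives the full title:

        $.ajax({
            type: "POST",
            url: "../../includes/forms/add_gallery.php",
            data: $("#addGallery form").serialize(),
            success: function() {
                $("#addGallery").dialog('close');
                // "Cats & Dogs" becomes "Cats%20%26%20Dogs", so the ampersand
                // survives the trip through the URL.
                window.location.href = 'display_album.php?album=' + encodeURIComponent(title);
            }
        });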

    Read the article

  • Dock Panel component for .NET that allows docking inside tab-pages?

    - by Lasse V. Karlsen
    I want to build a user interface that, for historical reasons, has a lot of "columns" of information. Many of these aren't relevant for all users in all cases, so I thought I'd look at dock panels to allow the users to hide or rearrange the columns according to their job scenario. This is WinForms in .NET 3.5. As such, I'd like the following:

        - Tab-pages in the main form
        - Each tab-page can have dock-panels docked into it
        - Dock-panels should be movable from one tab-page to another

    I've tried the following component packages so far without luck:

        - Telerik: allows me to dock inside a tab-page, but dock-panels can't move from one tab-page to another. When attempting to drop a floating panel onto a different tab-page than the one it came from, it appears the dock will succeed, but when dropped it is docked on its owner container.
        - Divelements SandDock: same problems as with Telerik.
        - DevExpress XtraBars: same problems as with Telerik.

    Basically, does anyone know of a component (package) that would allow me to do what I want?

    Read the article

  • How to prevent popups when loading a keystore

    - by Newtopian
    Hi, as a corollary to this question, I wanted to ask if you know how to prevent the popping-up of dialogs that either ask for a password or ask to insert a certificate. We are currently building a system where we have to use the Windows keystore to get certificates that are stored on a USB token containing both reader and certificate. Unlike the original question, we do not experience problems when loading the keystore, but rather when accessing it. If there is only a single certificate in the keystore, there is no problem: we get the appropriate password popup at the appropriate time, and that's it. However, if a second USB key gets inserted in the system and later removed, the entry remains in the keystore, and from then on, every time we try to access information in the keystore, we get a popup asking to insert the key. This occurs for every certificate in the store whose key is not currently connected to the computer. The system we are interfacing with requires these certificates and necessitates that we perform multiple cryptographic operations, so having these popups come up every time is rather annoying, to say the least.

    Read the article

  • C++/CLI: Compiling static library with /CLR support

    - by user289770
    We have old (working) code that consists of a static library compiled with /CLR, and a C++/CLI DLL that links to the static lib. We are about to add new features to this static lib. Now, I've heard from numerous sources that CLR static libraries are not supported by Microsoft, and therefore I'm pushing to clean this up and switch to a DLL before we start adding new features to this project. However, I haven't been able to find any official information from Microsoft regarding this (say, from MSDN - other than their forums). I would appreciate any resources about this whole "static lib with CLR" issue.

    Read the article

  • How to populate an ontology at runtime?

    - by Chan
    I have a configuration file with a lot of data, like sensor locations, types, rules for activating devices, etc. - basically everything related to a pervasive system. I plan to design an ontology for this domain. My doubt is how I should populate the ontology with the information in the configuration file, as the configuration files are going to change every now and then. Earlier I was planning to use XML, so that I could just read the configuration file at runtime and create an XML document as per the XSD. Do we use the same technique for ontologies? If yes, then what is the format of the populated ontology? Thanks, Chan

    Read the article

  • Hex to Decimal conversion in C

    - by darkie15
    Hi all, here is my code, which does the conversion from hex to decimal. The hex values are stored in an unsigned char array:

        int liIndex;
        long hexToDec;
        unsigned char length[4];

        for (liIndex = 0; liIndex < 4; liIndex++)
        {
            length[liIndex] = (unsigned char) *content;
            printf("\n Hex value is %.2x", length[liIndex]);
            content++;
        }
        hexToDec = strtol(length, NULL, 16);

    Each array element contains 1 byte of information, and I have read 4 bytes. When I execute it, here is the output that I get:

        Hex value is 00
        Hex value is 00
        Hex value is 00
        Hex value is 01
        Chunk length is 0

    Can anyone please help me understand the error here? The decimal value should have come out as 1 instead of 0. Regards, darkie

    Read the article

  • Performance differences between iframe hiding methods?

    - by Ender
    Is there a major performance difference between the following?

        <iframe style="visibility:hidden" />
        <iframe style="width:0px; height:0px; border:0px" />

    I'm using a hidden iframe to pull down and parse some information from an external server. If the iframe actually attempts to render the page, this may suck up a lot of CPU cycles. Of course, I'd ideally just want to get the raw markup - for example, if I could prevent the iframe from loading img tags, that would be perfect.
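
    For reference, a third variant is to build the iframe with display:none, which removes it from layout entirely; note that reading its content afterwards via contentDocument only works when the framed page is same-origin. A small sketch:

        var frame = document.createElement('iframe');
        frame.style.display = 'none';
        frame.src = '/some/page.html'; // hypothetical same-origin page
        frame.onload = function() {
            // With a same-origin page, the raw markup is readable here.
            var markup = frame.contentDocument.documentElement.innerHTML;
            console.log(markup.length);
        };
        document.body.appendChild(frame);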

    Read the article

  • saving iPhone program state with a deep UINavigationController

    - by jr
    Can someone suggest a good way to save the program state (UINavigationController stack, etc.) of an iPhone application? My application obtains a bunch of information from the network, and I want to return the person to the last screen they were on, even if it was 3 or 4 screens deep. I assume that I will need to reload the data from the network along the way as I recreate the UINavigationControllers; I don't necessarily have a problem with this. I'm thinking about maybe having my UINavigationController objects implement some type of protocol which allows me to save/set their state. I'm looking to hear from others who may have needed to implement a similar scenario and how they accomplished it. My application has a UITabBarController at the root and a UINavigationController for each tab bar item. Thanks!

    Read the article

  • Externalising Google Maps InfoWindow Content When Marker Is Selected

    - by Mark
    Hi, I'm wondering if anyone knows whether it is possible to take the content of a Google Maps InfoWindow and place it in an external DIV when the marker on the map is clicked. I've had a good dig around both the API docs and Google to see if I can find any examples or information relating to this, but have had no luck so far. However, I've not had a lot of time since I was asked about this one, so I have had to skim a bit; it could be that I've missed something, but nothing seems to be jumping out at me. Essentially, I'd just like to know if this is indeed possible, so that I don't waste any more time researching something that is currently not possible with Google Maps. If anyone has any code, examples, or ideas about how to go about doing this, that would be very much appreciated! Thanks, Mark
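
    As an illustrative sketch (not from the original post): the content handed to an InfoWindow is ordinary markup, so the same string can be written into any element on the page when the marker's click event fires. Assuming the v3 API, an existing map and marker, and a <div id="details"> in the page:

        google.maps.event.addListener(marker, 'click', function() {
            // Reuse whatever markup would have gone into the InfoWindow.
            var content = '<strong>' + marker.getTitle() + '</strong>';
            document.getElementById('details').innerHTML = content;
            // Optionally still show it on the map as well:
            // infoWindow.setContent(content);
            // infoWindow.open(map, marker);
        });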

    Read the article

  • Link phpbb usernames to drupal profiles

    - by Toxid
    I'm using Drupal and phpBB with a bridge called phpbbforum. It works quite well: the user information is synced between the Drupal and phpBB databases. The forum is embedded in a Drupal page, so all variables that come with page.tpl.php should be available. I want Drupal to be the only profile handler, so that when someone clicks on a phpBB username, that person gets linked to the Drupal profile. In phpBB's template files, the link to the profile is produced by the function get_username_string. I think the right place to edit is the /includes/functions_content.php file, on line 1178. Right above that line it says "* Get username details for placing into templates." and there's a section about profile links. I just can't figure out how to edit it so that the profile links lead to Drupal profiles. Can anyone figure this one out?

    Read the article

  • Complex queries using Rails query language

    - by Daniel Johnson
    I have a query used for statistical purposes. It breaks down the number of users that have logged in a given number of times. User has_many installations, and installation has a login_count.

        select total_login as 'logins', count(*) as `users`
        from (select u.user_id, sum(login_count) as total_login
              from user u
              inner join installation i on u.user_id = i.user_id
              group by u.user_id) g
        group by total_login;

        +--------+-------+
        | logins | users |
        +--------+-------+
        |      2 |     3 |
        |      6 |     7 |
        |     10 |     2 |
        |     19 |     1 |
        +--------+-------+

    Is there some elegant ActiveRecord-style find to obtain this same information? Ideally as a hash collection of logins and users: { 2=>3, 6=>7, ... I know I can use SQL directly, but wanted to know how this could be solved in Rails 3.

    Read the article

  • Trouble with authlogic_rpx

    - by Andrei
    Hi, I'm trying to run http://github.com/tardate/rails-authlogic-rpx-sample (only the Rails version was changed), but I get the error message at http://gist.github.com/385696 when RPX returns information after successful authentication via Google Account. What is wrong here, and how can I fix it? The code was successfully tested with Rails 2.3.3 by its author: http://rails-authlogic-rpx-sample.heroku.com/ I am running on Windows with Cygwin and rails (2.3.5), rpx_now (0.6.20), authlogic_rpx (1.1.1). Update: a few hours later, RPX rejected my app: http://img96.imageshack.us/img96/2508/14128362.png

    Read the article

  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from the Management Data Warehouse functionality built into SQL Server 2008, through commercial products such as Confio Ignite 8, to rolling my own solution using perfmon, performance counters, and various information collected from the dynamic management views and functions. What I am finding is that while each of these approaches has its own strengths, they all have weaknesses too. I feel that to actually get people within the organisation to take the monitoring of SQL Server performance seriously, whatever solution we roll out has to be very simple and quick to use, must provide some form of dashboard, and the act of monitoring must have minimal impact on the production databases (and perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear what others are using for this task. Any recommendations?

    Read the article

  • How to tune ASP.NET CreateUserWizard?

    - by Max
    I have created an ASP.NET WebForms site on IIS 7.5. I want to create a step-by-step user registration. I want to store the basic and detailed information about registered users in a specially created database table (not in the aspnet_users table). I want to validate the email first, and then prevent the next registration step for any user whose email address already exists in the database. At the last registration step I want to present a summary form; all previous input and select fields should be duplicated in this form with the "disabled" attribute. Please tell me how to adjust the CreateUserWizard ASP.NET control and the web.config file to meet these needs.

    Read the article

  • Converting Json to Java

    - by Binaryrespawn
    Hi all, I want to be able to access properties from a JSON string within my Java action method. The string is available by simply saying myJsonString = object.getJson(); Below is an example of what the string can look like:

        {'title': 'Computing and Information systems',
         'id': 1,
         'children': 'true',
         'groups': [{'title': 'Level one CIS',
                     'id': 2,
                     'children': 'true',
                     'groups': [{'title': 'Intro To Computing and Internet',
                                 'id': 3,
                                 'children': 'false',
                                 'groups': []}]}]}

    In this string, every JSON object contains an array of other JSON objects. The intention is to extract a list of ids for any object possessing a groups property that contains other JSON objects. I looked at Google's Gson as a potential JSON plugin. Can anyone offer some form of guidance as to how I can generate Java from this JSON string? Thank you, kind regards.
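
    To illustrate the traversal being described (collect the id of every node whose groups array contains other objects), here is a sketch in JavaScript, where the structure is native; with Gson, the same recursive walk would be expressed over JsonObject/JsonArray. The variable myJsonString is assumed to hold the text above (strict JSON parsers reject the single quotes, so they are normalized first):

        function collectGroupIds(node, ids) {
            ids = ids || [];
            if (node.groups && node.groups.length > 0) {
                ids.push(node.id);
                node.groups.forEach(function (child) {
                    collectGroupIds(child, ids);
                });
            }
            return ids;
        }

        var tree = JSON.parse(myJsonString.replace(/'/g, '"'));
        console.log(collectGroupIds(tree)); // [1, 2] for the sample above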

    Read the article

  • Convert data retrieved from MySQL database into JSON object using Python/Django

    - by rohanbk
    I have a MySQL database called People with the following schema: <id, name, foodchoice1, foodchoice2>. The database contains a list of people and the two choices of food they wish to have at a party (for example). I want to create some kind of Python web service that will output a JSON object. An example of the output would be:

        {
            "guestlist": [
                {"id": 1, "name": "Bob", "choice1": "chicken", "choice2": "pasta"},
                {"id": 2, "name": "Alice", "choice1": "pasta", "choice2": "chicken"}
            ],
            "partyname": "My awesome party",
            "day": "1",
            "month": "June",
            "2010": "null"
        }

    Basically, every guest is stored in the 'guestlist' collection along with their choices of food, and at the end of the JSON object is some additional information that only needs to be mentioned once. My question is about the method I need to use to grab the data from my database and create the JSON object. Do I need to use the standard Model/View structure of Django, or can I get away with something much simpler, since what I need to do is really simple?

    Read the article

  • asp.net free webcontrol to display crosstab or pivot reports with column and row grouping, subtotals

    - by dev-cu
    Hello, I want to develop some crosstab (also known as pivot) reports in ASP.NET, with the x-axis and y-axis being dynamic and with grouping by row and column. For example: products on the y-axis and dates on the x-axis, with the body showing the number of sales of a given product on a given date; if the dates on the x-axis are years, I want subtotals for each month for a product (row) and subtotals of sales of all products for a date (column). I know there are products available to build reports, but I am using MySQL, so Reporting Services is not an option. It's not necessary for the client to build additional reports; I think the simplest solution is a control to display such information, rather than Crystal Reports (which is not free) or something more complex. I want to know if there is a free control available to reach my goal. Does anybody know of such a control, or have a different idea? Thanks in advance.

    Read the article

  • How can I compile a Perl script inside a running Perl session?

    - by Joel
    I have a Perl script that takes user input and creates another script that will be run at a later date. I'm currently going through and writing tests for these scripts, and one of the tests that I would like to perform is checking whether the generated script compiles successfully (e.g., perl -c <script>). Is there a way that I can have Perl compile the generated script without having to spawn another Perl process? I've tried searching for answers, but searches just turn up information about compiling Perl scripts into executable programs.

    Read the article

  • does PHP 5.3 change the way file_get_contents works?

    - by ceejayoz
    I'm having an odd issue with PHP's file_get_contents. In the past, file_get_contents on a remote file returned the text of that file regardless of the HTTP status code. If I hit an API and it sends back JSON error information with a status of 500, file_get_contents gives me that JSON (with no indication that an error code was encountered). I've just set up an Ubuntu 10.04 server, which is the first Ubuntu to ship PHP 5.3. Instead of giving me the JSON, PHP throws a warning when a 500 error is present. As a result, I can't parse the JSON and give a nice error message. It's nice that PHP notices there's an error in the remote file, but I need the JSON even (especially!) if there's a 500 error. There doesn't appear to be any way to switch this off. Has anyone encountered this? Any tips?

    Read the article

  • What is the email limit on Google Apps Script?

    - by jmvidal
    Can someone tell me if there is a webpage that lists the official Google limit on emails sent from a Google Apps Script? In testing my little script I got "Service invoked too many times: email (# 59)" and now I can't send any more emails. The obvious place for this information would be the MailApp.sendEmail documentation, but that does not say anything about a limit. I found this discussion on the Google forum from 2/11/10 where users discuss a 100 or 500 emails/day limit with a 24-hour ban, but no one from Google provided an official answer. Note that this is for Google Apps Script, which is different from Google App Engine, which does have well-published limits.
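
    For what it's worth, current releases of Apps Script expose MailApp.getRemainingDailyQuota(), which reports how many more recipients the account may email that day; that at least makes the quota visible even where the documentation does not state the number. A small sketch:

        function sendWithQuotaCheck(recipient, subject, body) {
            var remaining = MailApp.getRemainingDailyQuota();
            if (remaining > 0) {
                MailApp.sendEmail(recipient, subject, body);
            } else {
                // Out of quota for today; log and try again later.
                Logger.log('Daily email quota exhausted; deferring send.');
            }
        }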

    Read the article

  • Oracle Hash Cluster Overflow Blocks

    - by Andrew
    When inserting a large number of rows into a single-table hash cluster in Oracle, it will fill up the block with values that hash to that hash value and then start using overflow blocks. These overflow blocks are listed as chained off the main block, but I cannot find detailed information on the way in which they are allocated or chained. When an overflow block is allocated for a hash value, is that block exclusively allocated to that hash value, or are the overflow blocks used as a pool, so that different hash values can start using the same overflow block? And how is the free space of the chain monitored - that is, as data continues to be inserted, does Oracle have to traverse the entire chain to find out whether there is free space in the current overflow chain, and only allocate a new block if it finds none?

    Read the article

  • break dataframe into subsets by factor values, send to function that returns glm class, how to recom

    - by Alex Holcombe
    Thanks to the ddply function in Hadley's plyr package, we can take a dataframe, break it down into subdataframes by factors, send each to a function, and then combine the function results for each subdataframe into a new dataframe. But what if the function returns an object of a class like glm, or in my case, a c("glm", "lm")? Then these can't be combined into a dataframe, can they? I get this error instead:

        Error in as.data.frame.default(x[[i]], optional = TRUE, stringsAsFactors = stringsAsFactors) :
          cannot coerce class 'c("glm", "lm")' into a data.frame

    Is there some more flexible data structure that will accommodate all the complex glm-class results of my function calls, preserving the information regarding the dataframe subsets? Or should this be done in an entirely different way?

    Read the article
