Search Results

Search found 17610 results on 705 pages for 'specific'.

Page 555/705 | < Previous Page | 551 552 553 554 555 556 557 558 559 560 561 562  | Next Page >

  • Nested URLs and Rewrite rules in Apache2

    - by Radha Krishna. S.
    Hi, I need some help with rewrite rules and nested URLs. I am using TikiWiki for my website and am in the process of setting up SE-friendly URLs for my projects. Specifically, I have the following rewrite rule so that www.example.com/projects points to a page that lists out all the projects hosted on example:

        RewriteRule ^Projects$ articles?type=Project [L]

    This works fine. Now I would like www.example.com/projects/project1 to point to a specific project. I have this rewrite rule:

        RewriteRule ^(Projects/Project1)$ tiki-read_article.php?articleId=6

    This works, but only partially. The content is all rendered, but the theme - images, CSS etc. - all go for a toss; the page is completely plain text. I understand that this happens because the relative paths in the theme/CSS/images all resolve against Projects as the base folder instead of the root of the website. I don't want to touch the CMS portion - changing the theme/CSS/image paths in the files - mostly for reasons of upgradability. Can someone help me understand and write a rule so that the above nested URL works? Regards, Radha

    Read the article

  • How can I enable PHP5 for a site? Having problems with every single method.

    - by user347662
    I'm working on a client site that is hosted on someone's DIY Debian Linux server [Apache/1.3.33 (Debian GNU/Linux)], and I'm trying to install a script that requires PHP5. By default, the server parses .php files with PHP 4.3.10-22, which is configured at /etc/php4/apache/php.ini, according to phpinfo(). On the server I can see a config directory for PHP5 adjacent to the PHP4 directory: /etc/php5.0/apache2/php.ini. I have tried multiple methods to enable PHP5 for the document root where the site's files are hosted, including all available methods mentioned here. By far, the most common suggestion I've found is to add one or both of the following lines to the site's .htaccess file:

        AddHandler application/x-httpd-php5 .php
        AddType application/x-httpd-php5 .php

    Trouble is, when either or both of those lines are present, the site forces my browser to download any .php files requested, without parsing the PHP at all. All of the other methods mentioned in the above article cause a 500 Internal Server Error. There is no hosting control panel I can access in a browser to enable PHP5 for the site, but I do have shell access. When I asked the server administrator about this issue, he encouraged me to search for the answer on Google. Where could I begin to troubleshoot this issue? Are there ways to test or verify the server's specific PHP5 installation and configuration, using the command line or some other method? Do you have other suggestions to enable PHP5?

    Read the article

  • Insert video clip in a lyx presentation and play it in GNU/Linux.

    - by Orjanp
    How can I insert a video clip into a presentation created in LyX? I have seen http://www.latex-community.org/forum/viewtopic.php?f=19&t=48. It works, but there the video starts in the background in an external player. I would prefer it to be played in the presentation itself. If an external player is used, it should at least start in the foreground, but the presentation takes the foreground. I am using evince in GNU/Linux as the PDF viewer, and Beamer is used as the presentation template. Is it possible to play a video file in an embedded player in the presentation itself? I created an example presentation; the code is found below.

        \documentclass[english]{beamer}
        \usepackage{mathptmx}
        \usepackage[T1]{fontenc}
        \usepackage[latin9]{inputenc}
        \usepackage{amsmath}
        \usepackage{amssymb}
        \makeatletter
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Textclass specific LaTeX commands.
        % this default might be overridden by plain title style
        \newcommand\makebeamertitle{\frame{\maketitle}}%
        \AtBeginDocument{
          \let\origtableofcontents=\tableofcontents
          \def\tableofcontents{\@ifnextchar[{\origtableofcontents}{\gobbletableofcontents}}
          \def\gobbletableofcontents#1{\origtableofcontents}
        }
        \makeatletter
        \long\def\lyxframe#1{\@lyxframe#1\@lyxframestop}%
        \def\@lyxframe{\@ifnextchar<{\@@lyxframe}{\@@lyxframe<*>}}%
        \def\@@lyxframe<#1>{\@ifnextchar[{\@@@lyxframe<#1>}{\@@@lyxframe<#1>[]}}
        \def\@@@lyxframe<#1>[{\@ifnextchar<{\@@@@@lyxframe<#1>[}{\@@@@lyxframe<#1>[<*>][}}
        \def\@@@@@lyxframe<#1>[#2]{\@ifnextchar[{\@@@@lyxframe<#1>[#2]}{\@@@@lyxframe<#1>[#2][]}}
        \long\def\@@@@lyxframe<#1>[#2][#3]#4\@lyxframestop#5\lyxframeend{%
          \frame<#1>[#2][#3]{\frametitle{#4}#5}}
        \makeatother
        \def\lyxframeend{} % In case there is a superfluous frame end
        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands.
        \usetheme{Warsaw}
        \usepackage{hyperref}
        \makeatother
        \usepackage{babel}
        \begin{document}
        \title{Testing video}
        \makebeamertitle
        \lyxframeend{}\section{Testing video}
        \lyxframeend{}\subsection{Testing video}
        \lyxframeend{}\lyxframe{Testing video}
        \href{run:video.wmv}{Movie}
        \appendix
        \lyxframeend{}
        \end{document}

    Read the article

  • Selectively intercepting methods using autofac and dynamicproxy2

    - by Mark Simpson
    I'm currently doing a bit of experimenting using Autofac-1.4.5.676, AutofacContrib and Castle DynamicProxy2. The goal is to create a coarse-grained profiler that can intercept calls to specific methods of a particular interface. The problem: I have everything working perfectly apart from the selective part. I gather that I need to marry up my interceptor with an IProxyGenerationHook implementation, but I can't figure out how to do this. My code looks something like this. The interface that is to be intercepted & profiled (note that I only care about profiling the Update() method):

        public interface ISomeSystemToMonitor
        {
            void Update(); // this is the one I want to profile
            void SomeOtherMethodWeDontCareAboutProfiling();
        }

    Now, when I register my systems with the container, I do the following:

        // Register interceptor gubbins
        builder.RegisterModule(new FlexibleInterceptionModule());
        builder.Register<PerformanceInterceptor>();

        // Register systems (just one in this example)
        builder.Register<AudioSystem>()
            .As<ISomeSystemToMonitor>()
            .InterceptedBy(typeof(PerformanceInterceptor));

    All ISomeSystemToMonitor instances pulled out of the container are intercepted and profiled as desired, other than the fact that it will intercept all of its methods, not just the Update method. Now, how can I extend this to exclude all methods other than Update()? As I said, I don't understand how I'm meant to say "for the PerformanceInterceptor, use this implementation of IProxyGenerationHook". All help appreciated, cheers! Also, please note that I can't upgrade to Autofac 2.x right now; I'm stuck with 1.
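
    A hedged sketch, not from the original post: rather than fighting the hook wiring in Autofac 1.4, an alternative is to make the interceptor itself selective, since DynamicProxy2 hands it the MethodInfo of every call. The timing and logging details below are illustrative assumptions; the namespace comment reflects where IInterceptor lived in DynamicProxy2-era Castle.Core.

        using System.Diagnostics;
        using Castle.Core.Interceptor; // IInterceptor/IInvocation in DynamicProxy2-era Castle.Core (later versions: Castle.DynamicProxy)

        public class PerformanceInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                // Only profile Update(); everything else passes straight through.
                if (invocation.Method.Name != "Update")
                {
                    invocation.Proceed();
                    return;
                }

                Stopwatch stopwatch = Stopwatch.StartNew();
                try
                {
                    invocation.Proceed();
                }
                finally
                {
                    stopwatch.Stop();
                    Debug.WriteLine(invocation.TargetType.Name + ".Update took " + stopwatch.ElapsedMilliseconds + " ms");
                }
            }
        }

    An IProxyGenerationHook would avoid even generating interception code for the other methods, but the filter above achieves the selective profiling without having to get a hook registered through the 1.4 interception module.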

    Read the article

  • Unit testing JSON output module, best practices

    - by Banang
    I am currently working on a module that takes one of our business objects and returns a JSON representation of that object to the caller. Due to limitations in our environment I am unable to use any existing JSON writer, so I have written my own, which is then used by the business object writer to serialize my objects. The JSON writer is tested in a way similar to this:

        @Test
        public void writeEmptyArrayTest() {
            String expected = "[ ]";
            writer.array().endArray();
            assertEquals(expected, writer.toString());
        }

    This is only manageable because of the small output each instruction produces, even though I keep feeling there must be a better way. The problem I am now facing is writing tests for the object writer module, where the output is much larger and much less manageable. The risk of spelling mistakes in the expected strings mucking up my tests seems too great, and writing code in this fashion seems both silly and unmanageable from a long-term perspective. I keep feeling like I want to write tests to ensure that my tests are behaving correctly, and this feeling worries me. Therefore, is there a better way of doing this? Surely there must be? Does anyone know of any good literature in regard to this specific case (doesn't have to be JSON, but you know what I mean)? Grateful for all help.

    Read the article

  • PostgreSQL: Auto-partition a table

    - by Adam Matan
    Hi, I have a huge database which holds pairs of numbers (A,B), each ranging from 0 to 10,000 and stored as floats, e.g. (1, 9984.4), (2143.44, 124.243), (0.55, 0), ... Since the PostgreSQL table which stores these pairs grew quite large, I have decided to partition it into inheriting sub-tables. I intend to create 100 such tables, each storing a range of 1000x1000. The problem is that these numbers tend to come in large chunks of nearby numbers, which means that in the future some tables will be nearly empty and some will hold a very large portion of the database. Unfortunately, the distribution of future pairs is as yet unknown. I am looking for a way to automatically repartition my table: if a certain subtable holds more than a specific number of pairs, it will be automatically partitioned into four sub-sub tables, and so on. My questions are:

        1. Is recursive partitioning and inheritance possible in PostgreSQL 8.3?
        2. Will indexes and query plans understand it?
        3. What's the best way to split a subtable once it grows too large?

    I should point out that this isn't a live database, so a downtime of a few hours every week is totally acceptable. Thanks in advance, Adam

    Read the article

  • Reliable strtotime() result for different languages

    - by Maksee
    There was always a strange bug in Joomla when adding a new article with the back-end displayed in a language other than English (for me it's Russian). The field "Finish Publishing" started to be the current date instead of the "Never" equivalent in Russian. For a site on PHP 4 I finally found that the strtotime function returns different results for arbitrary words. For "Never" it is always -1, and Joomla relies on this result in the JDate implementation. But in other cases it sometimes returns a valid date. For the Russian translation of Never (Никогда) this is the case, but also for a single "N", so if one decided to change the string to something else he or she would face the same issue. So the code below:

        <?php
        echo "Res:".strtotime("N")."<br>";
        echo "Res:".strtotime("Nev")."<br>";
        echo "Res:".strtotime("Neve")."<br>";
        echo "Res:".strtotime("Never")."<br>";
        ?>

    Outputs:

        Res:1271120400
        Res:-1
        Res:-1
        Res:-1

    So what would the solutions be in this case? I would prefer not to write a language-specific date.php handler, but to modify the date method of the JDate class - but what language-neutral changes would be needed in order to detect an invalid string? Thank you

    Read the article

  • Localization approach for XSLT + RESX in ASP.NET

    - by frankadelic
    I have an ASP.NET web app where the back end data (XML format) is transformed using XSLT, producing XHTML which is output into the page. Simplified code:

        XmlDocument xmlDoc = MyRepository.RetrieveXmlData(keyValue);
        XslCompiledTransform xsl = new XslCompiledTransform();
        xsl.Load(pathToXsl, XsltSettings.TrustedXslt, null);
        StringWriter stringWriter = new StringWriter();
        xsl.Transform(xmlDoc, null, stringWriter);
        myLiteral.Text = stringWriter.ToString();

    Currently my XSL file contains the XHTML markup elements, as well as text labels, which are currently in English. For example:

        <p>Title:<br />
          <xsl:value-of select="title"/>
        </p>
        <p>Description:<br />
          <xsl:value-of select="desc"/>
        </p>

    I would like the text labels (Title and Description above) to be localized. I was thinking of using .NET resource files (.resx), but I don't know how the resx string resources would get pulled in to the XSLT when the transformation takes place. I would prefer not to have locale-specific copies of the XSLT file, since that means a lot of duplicate transformation logic. (NOTE: the XML data is already localized, so I don't need to change that)
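
    A hedged sketch, not part of the original question: one way to bridge resx and XSLT is an extension object passed in through XsltArgumentList, so the stylesheet can ask for strings by key. The resource base name, namespace URI and class names below are illustrative assumptions.

        using System.Globalization;
        using System.Resources;
        using System.Xml.Xsl;

        // Exposes resx lookups to the stylesheet.
        public class XslLabelProvider
        {
            private static readonly ResourceManager Labels =
                new ResourceManager("MyWebApp.Resources.Labels", typeof(XslLabelProvider).Assembly);

            public string GetString(string key)
            {
                // Falls back to the key itself if the resource is missing.
                return Labels.GetString(key, CultureInfo.CurrentUICulture) ?? key;
            }
        }

        // At transform time:
        XsltArgumentList args = new XsltArgumentList();
        args.AddExtensionObject("urn:labels", new XslLabelProvider());
        xsl.Transform(xmlDoc, args, stringWriter);

    In the stylesheet, declare xmlns:labels="urn:labels" on xsl:stylesheet and replace each literal label with something like <xsl:value-of select="labels:GetString('Title')"/>, so a single XSLT file can serve every locale.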

    Read the article

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question. Probably numerous ways to solve this: There is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_4', 'event_4' The events do not come in in completely random order. We will assume that there are some complex patterns to the order that most events come in, and the rest of the events are just random. We do not know the patterns ahead of time though. After each event is received, I want to predict what the next event will be based on the order that events have come in in the past. The predictor will then be told what the next event actually was: Predictor=new_predictor() prev_event=False while True: event=get_event() if prev_event is not False: Predictor.last_event_was(prev_event) predicted_event=Predictor.predict_next_event(event) The question arises of how long of a history that the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer. The answer can't be infinte though for practicality. So I believe that the predictions will have to be done with some kind of rolling history. Adding a new event and expiring an old event should therefore be rather efficient, and not require rebuilding the entire predictor model, for example. Specific code, instead of research papers, would add for me immense value to your responses. Python or C libraries are nice, but anything will do. Thanks! Update: And what if more than one event can happen simultaneously on each round. Does that change the solution?
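
    A minimal sketch, not from the post (in C#, since the asker says anything will do): an order-k frequency table over a rolling window. It is a naive baseline rather than a pattern-discovery algorithm; a variable-order Markov model or a prediction suffix tree would be the next step up.

        using System.Collections.Generic;
        using System.Linq;

        // Counts, for each recent context of `order` events, which event tended to follow it.
        public class EventPredictor
        {
            private readonly int order;        // how many previous events form the context
            private readonly int maxHistory;   // rolling window, so old observations eventually expire
            private readonly Queue<string> history = new Queue<string>();
            private readonly Dictionary<string, Dictionary<string, int>> counts =
                new Dictionary<string, Dictionary<string, int>>();

            public EventPredictor(int order, int maxHistory)
            {
                this.order = order;
                this.maxHistory = maxHistory;
            }

            // Call once per observed event.
            public void LastEventWas(string observed)
            {
                string context = CurrentContext();
                if (context.Length > 0)
                {
                    Dictionary<string, int> followers;
                    if (!counts.TryGetValue(context, out followers))
                        counts[context] = followers = new Dictionary<string, int>();
                    followers[observed] = followers.ContainsKey(observed) ? followers[observed] + 1 : 1;
                }

                history.Enqueue(observed);
                if (history.Count > maxHistory)
                    history.Dequeue();   // note: a real rolling model would also age out the counts
            }

            // Most frequent follower of the current context, or null if the context is unseen.
            public string PredictNextEvent()
            {
                Dictionary<string, int> followers;
                if (counts.TryGetValue(CurrentContext(), out followers) && followers.Count > 0)
                    return followers.OrderByDescending(kv => kv.Value).First().Key;
                return null;
            }

            private string CurrentContext()
            {
                var recent = history.Skip(System.Math.Max(0, history.Count - order));
                return string.Join("|", recent);
            }
        }

    Simultaneous events could be handled in the same structure by treating each distinct set of events that arrives in one round as a single composite symbol, at the cost of a larger alphabet.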

    Read the article

  • ASP.NET: aggregating validators in a user control

    - by orsogufo
    I am developing a web application where I would like to perform a set of validations on a certain field (an account name in the specific case). I need to check that the value is not empty, matches a certain pattern and is not already used. I tried to create a UserControl that aggregates a RequiredFieldValidator, a RegexValidator and a CustomValidator, then I created a ControlToValidate property like this:

        public partial class AccountNameValidator : System.Web.UI.UserControl
        {
            public string ControlToValidate
            {
                get { return ViewState["ControlToValidate"] as string; }
                set
                {
                    ViewState["ControlToValidate"] = value;
                    AccountNameRequiredFieldValidator.ControlToValidate = value;
                    AccountNameRegexValidator.ControlToValidate = value;
                    AccountNameUniqueValidator.ControlToValidate = value;
                }
            }
        }

    However, if I insert the control on a page and set ControlToValidate to some control ID, when the page loads I get an error that says Unable to find control id 'AccountName' referenced by the 'ControlToValidate' property of 'AccountNameRequiredFieldValidator', which makes me think that the controls inside my UserControl cannot correctly resolve the controls in the parent page. So, I have two questions: 1) Is it possible to have validator controls inside a UserControl validate a control in the parent page? 2) Is it correct and good practice to "aggregate" multiple validator controls in a UserControl? If not, what is the standard way to proceed?

    Read the article

  • How to tell what name RIA Services/EF Model uses for Associations?

    - by Nick Gotch
    Hi, I'm working on a C#.NET 3.5 WCF RIA Services app and having an issue with my Entity Framework model. My entity Foo is mapped to a DB table and has a primary key called FooId. My Bar is mapped to a DB view. I've selectively designed this view to generate a composite key in the EF using two of the columns (by making sure they were non-nullable and the others are all nullable. This was done using NULLIF and ISNULL in the view design.) I'm able to add this view to the model with no problem but I keep running into an issue when I try to map an association between the two. Foo should contain many Bars but I keep getting the following error when I add the association: Unable to retrieve AssociationType for association 'FK_Bar_Foo' According to this page, it looks like this might work if I can properly name the association (since RIA Services looks for specific names.) I've tried several variants of names that match the pattern of other associations with no success. Does anyone know if there's a place I can look to find out what name it's looking for? Thanks,

    Read the article

  • WPF DataValidation on a DataTemplate object in an ItemsControl

    - by Matt H.
    I have two datatemplates, both very similar... here is one of them:

        <DataTemplate x:Key="HeadingTemplate">
            <Grid x:Name="mainHeadingGrid" Margin="5,5,30,0" HorizontalAlignment="Stretch">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto" />
                    <ColumnDefinition />
                </Grid.ColumnDefinitions>
                <TextBlock Grid.Column="1" Margin="30,3,10,0" Foreground="Black" FontWeight="Bold"
                           HorizontalAlignment="Left" TextWrapping="Wrap">
                    <TextBlock.Text>
                        <MultiBinding Converter="{StaticResource myHeadingConverter}" ConverterParameter="getRNHeadingTitle" Mode="TwoWay">
                            <Binding Path="num"/>
                            <Binding Path="name"/>
                        </MultiBinding>
                    </TextBlock.Text>
                </TextBlock>
                <TextBox Grid.Column="1" Text="{Binding Path=moreInfo}"/>
            </Grid>
        </DataTemplate>

    I use a selector in my ItemsControl to choose between the two, based on the object it is bound to. I want to use validation to check through all of the properties and put a big exclamation point in front of the whole datatemplate as it is displayed in the itemscontrol. How do I do this? All of the examples I've found explain how to set a ValidationRule on a specific control in the datatemplate, in that control's binding. I want to apply my validation rule to the entire template... Help! :)
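
    A hedged sketch, not from the post: one route is to let the bound item report its own validity through IDataErrorInfo, so the template can surface a single object-level error instead of per-control ValidationRules. The property names mirror the bindings above; the HasErrors helper and the specific rules are assumptions.

        using System.ComponentModel;

        // The item each DataTemplate is bound to; IDataErrorInfo lets WPF ask the
        // whole object whether any of its properties are invalid.
        public class HeadingItem : IDataErrorInfo
        {
            public string num { get; set; }
            public string name { get; set; }
            public string moreInfo { get; set; }

            // Per-property validation, used by bindings with ValidatesOnDataErrors=True.
            public string this[string propertyName]
            {
                get
                {
                    switch (propertyName)
                    {
                        case "num":  return string.IsNullOrEmpty(num) ? "A number is required." : null;
                        case "name": return string.IsNullOrEmpty(name) ? "A name is required." : null;
                        default:     return null;
                    }
                }
            }

            // Object-level error; anything bound to this can show the big exclamation point.
            public string Error
            {
                get { return this["num"] ?? this["name"]; }
            }

            public bool HasErrors { get { return Error != null; } }
        }

    An extra element in the template (say a TextBlock showing "!" in Grid.Column 0) could then bind its Visibility to HasErrors through a BooleanToVisibilityConverter, while the editable TextBox adds ValidatesOnDataErrors=True to its binding; for live updates the class would also need INotifyPropertyChanged.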

    Read the article

  • Question about MySQLdb, OS X 10.5, and authentication

    - by timpone
    I'm a noob at Python and have been having problems with MySQLdb and OS X Leopard 10.5. I have a PHP app that is doing db access just fine with PDO, but I also want to access the database with Python. When I use the same credentials with MySQLdb as PHP, I get the following error:

        File "build/bdist.macosx-10.5-i386/egg/MySQLdb/connections.py", line 188, in __init__
        _mysql_exceptions.OperationalError: (1045, "Access denied for user 'arc_db'@'localhost' (using password: YES)")

    The authentication piece works fine on my Ubuntu server (installed via apt-get), implying that it is something specific to my OS X MySQLdb install. Looking at some postings, I thought it would be my local build of MySQLdb, which seems to be problematic with OS X. But I am able to import it fine:

        Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12)
        [GCC 4.0.1 (Apple Inc. build 5465)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import MySQLdb
        >>>

    Also, wanting to establish a positive case, I am able to access and return results from a database titled test_something (which presumably bypasses MySQL's authentication - not sure exactly how though). Trying to figure out a little more about what is going on, I turned on logging for MySQL and got the following (added my own comments):

        100609 19:09:45 3 Connect Access denied for user 'arc_db'@'localhost' (using password: YES)   //not worked
        100609 19:10:02 4 Connect arc_db@localhost on arc_development                                 //did work

    I'm not really sure what the 3 or 4 means, but presumably a success or failure. So, I guess, what would be the next step? Am I making some obvious stupid Python mistake (very likely)? Is there a better way for me to prove that this should / can be working? Is there any way to determine what MySQLdb is sending exactly in its authentication message to MySQL? Thanks

    Read the article

  • FluentNHibernate Overrides: UseOverridesFromAssemblyOf non-generic version

    - by ThiagoAlves
    Hi, I have a repository class that inherits from a generic implementation:

        namespace RepositoryImplementation
        {
            public class PersonRepository : Web.Generics.GenericNHibernateRepository<Person>
        }

    The generic repository implementation uses Fluent NHibernate conventions. They're working fine. One of those conventions is that all properties are not nullable. Now I need to define that specific properties may be nullable, outside the conventions. Fluent NHibernate has an interesting override mechanism:

        namespace RepositoryImplementation
        {
            public class PersonMappingOverride : IAutoMappingOverride<Person>
            {
                public void Override(FluentNHibernate.Automapping.AutoMapping<Funcionario> mapping)
                {
                    mapping.Map(x => x.PhoneNumber).Nullable();
                }
            }
        }

    Now I need to register the override class into Fluent NHibernate. I have the following code in the Web.Generics.GenericNHibernateRepository generic class:

        AutoMap.AssemblyOf<Person>()
            .Where(type => type.Namespace == "Entities")
            .UseOverridesFromAssemblyOf<PersonMappingOverride>();

    The problem is: UseOverridesFromAssemblyOf is a generic method, and I can't call .UseOverridesFromAssemblyOf<PersonMappingOverride>() there, because that would cause a circular reference. I don't want the generic repository to know either the repository or the mapping override class, because they vary from project to project. I see another solution: in the GenericNHibernateRepository class I can call this.GetType() and get the repository implementation type (e.g. PersonRepository). However, I can't call UseOverridesFromAssemblyOf() passing a type. Is there another way to configure overrides in FluentNHibernate? If not, how could I call UseOverridesFromAssemblyOf<T> without making the generic repository depend upon the repository implementation or the mapping override class? (Source: http://wiki.fluentnhibernate.org/Auto_mapping#Overrides)
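
    A hedged reflection sketch, not from the wiki page: because UseOverridesFromAssemblyOf<T> only uses T to locate an assembly, the generic base class can close the generic method over its own runtime type (e.g. PersonRepository), which lives in the same project assembly as the overrides. The BuildModel wrapper is illustrative, and this assumes the method name is unambiguous on the model type; an overload would need extra disambiguation.

        using System.Reflection;
        using FluentNHibernate.Automapping;

        // Inside Web.Generics.GenericNHibernateRepository<T>, where the AutoMap call already lives.
        protected AutoPersistenceModel BuildModel()
        {
            AutoPersistenceModel model = AutoMap.AssemblyOf<Person>()
                .Where(type => type.Namespace == "Entities");

            // this.GetType() is the concrete repository discovered at runtime, so the generic
            // base class never references the derived types at compile time.
            MethodInfo open = model.GetType().GetMethod("UseOverridesFromAssemblyOf");
            MethodInfo closed = open.MakeGenericMethod(this.GetType());
            closed.Invoke(model, null);   // same effect as model.UseOverridesFromAssemblyOf<PersonRepository>()

            return model;
        }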

    Read the article

  • iPhone - User Defaults and UIImages

    - by Staros
    Hello, I've been developing an iPhone app for the last few months. Recently I wanted to up performance and cache a few of the images that are used in the UI. The images are downloaded randomly from the web by the user, so I can't add specific images to the project. I'm also already using NSUserDefaults to save other info within the app. So now I'm attempting to save a dictionary of UIImages to my NSUserDefaults object and get...

        -[UIImage encodeWithCoder:]: unrecognized selector sent to instance

    I then decided to subclass UIImage with a class named UISaveableImage and implement NSCoding. So now I'm at...

        @implementation UISaveableImage

        -(void)encodeWithCoder:(NSCoder *)encoder
        {
            [encoder encodeObject:super forKey:@"image"];
        }

        -(id)initWithCoder:(NSCoder *)decoder
        {
            if (self=[super init]){
                super = [decoder decodeObjectForKey:@"image"];
            }
            return self;
        }

        @end

    which isn't any better than where I started. If I were able to convert a UIImage to NSData I would be good, but all I can find are functions like UIImagePNGRepresentation which require me to know what type of image this was - something that UIImage doesn't allow me to do. Thoughts? I feel like I might have wandered down the wrong path...

    Read the article

  • Google Analytics API Authentication Speedup

    - by Paulo
    I'm using a Google Analytics API class in PHP made by Doug Tan to retrieve Analytics data from a specific profile. Check the URL here: http://code.google.com/intl/nl/apis/analytics/docs/gdata/gdataArticlesCode.html When you create a new instance of the class you can add the profile id, your Google account + password, a date range and whatever dimensions and metrics you want to pick up from Analytics. For example, I want to see how many people visited my website from different countries in 2009:

        //make a new instance from the class
        $ga = new GoogleAnalytics($email,$password);

        //website profile example id
        $ga->setProfile('ga:4329539');

        //date range
        $ga->setDateRange('2010-02-01','2010-03-08');

        //array to receive data from metrics and dimensions
        $array = $ga->getReport(
            array('dimensions'=>('ga:country'),
                  'metrics'=>('ga:visits'),
                  'sort'=>'-ga:visits'
            )
        );

    Now you know how this API class works, I'd like to address my problem: speed. It takes a lot of time to retrieve multiple types of data from the Analytics database, especially if you're building different arrays with different metrics/dimensions. How can I speed up this process? Is it possible to store all the possible data in a cache so I am able to retrieve the data without loading it over and over again?

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:

        1. The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread which has Highest priority.
        2. I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
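
    A minimal sketch, not from the post, of the dedicated blocking reader described above with the framing concern made explicit. The 4-byte length prefix is an assumption about the feed's wire format; whatever the real protocol is, the point is that one thread both receives and reassembles complete messages.

        using System;
        using System.Net.Sockets;
        using System.Threading;

        // Dedicated high-priority reader: blocks in Receive, reassembles complete
        // messages from the TCP stream, then hands them to a processing callback.
        public sealed class TickReader
        {
            private readonly Socket socket;
            private readonly Action<byte[]> onMessage;

            public TickReader(Socket connectedSocket, Action<byte[]> onMessage)
            {
                socket = connectedSocket;
                this.onMessage = onMessage;
            }

            public void Start()
            {
                var thread = new Thread(ReceiveLoop) { IsBackground = true, Priority = ThreadPriority.Highest };
                thread.Start();
            }

            private void ReceiveLoop()
            {
                var lengthBuffer = new byte[4];
                while (true)
                {
                    ReadExactly(lengthBuffer, 4);   // assumed 4-byte big-endian length prefix
                    int length = (lengthBuffer[0] << 24) | (lengthBuffer[1] << 16) | (lengthBuffer[2] << 8) | lengthBuffer[3];
                    var payload = new byte[length];
                    ReadExactly(payload, length);
                    onMessage(payload);             // exactly one complete market data message
                }
            }

            private void ReadExactly(byte[] buffer, int count)
            {
                int read = 0;
                while (read < count)
                {
                    int n = socket.Receive(buffer, read, count - read, SocketFlags.None);
                    if (n == 0) throw new SocketException((int)SocketError.ConnectionReset); // peer closed
                    read += n;
                }
            }
        }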

    Read the article

  • php/mongodb: how do references work in php?

    - by harald
    hello, i asked this in the mongodb user-group, but was not satisfied with the answer, so -- maybe someone at stackoverflow can enlighten me. the question was:

        $b = array('x' => 1);
        $ref = &$b;
        $collection->insert($ref);
        var_dump($ref);

    $ref does not contain '_id', because it's a reference to $b, the handbook states. (the code snippet is taken from the php mongo documentation) i should add that:

        $b = array('x' => 1);
        $ref = $b;
        $collection->insert($ref);
        var_dump($ref);

    in this case $ref contains the _id -- for those who do not know what the insert method of the mongodb-php-driver does -- because $ref is passed by reference (note the $b with and without the referencing '&'). on the other hand...

        function test(&$data) {
            $data['_id'] = time();
        }
        $b = array('x' => 1);
        $ref =& $b;
        test($ref);
        var_dump($ref);

    $ref contains _id when i call a userland function. my question is: how do the references in these cases differ? my question is probably not mongodb specific -- i thought i would know how references in php work, but apparently i do not: the answer in the mongodb user-group was that this was the way references in php work. so ... how do they work -- explained with these code-snippets? thanks in advance!!!

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.Net fat client w/ IIS hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious? For example today we serialize an array of ints as follows: [4-bytes (array length)][4-bytes][4-bytes] Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data (which we are), it adds up very fast. Note: We want to avoid general compression algorithms because of the latency associated with it. Remoting only supports buffered requests/responses(no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency. Whereas reprocessing the entire buffered response into new compressed buffer is too expensive.
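
    A hedged sketch, not from the post, of one concrete trick in the same spirit as bit-packing bools: variable-length (7-bits-per-byte) encoding with ZigZag for signed values, which shrinks int arrays whose elements are usually small in magnitude. It is the same scheme BinaryWriter uses internally for string length prefixes; whether it pays off depends entirely on the data distribution, and it costs a little CPU on both ends.

        using System.IO;

        public static class PrimitiveArrayWriter
        {
            // Writes each int as 1-5 bytes: 7 payload bits per byte, high bit = "more bytes follow".
            public static void WriteInts(BinaryWriter writer, int[] values)
            {
                WriteVarUInt(writer, (uint)values.Length);   // length prefix, also variable-length
                foreach (int value in values)
                {
                    // ZigZag-encode so small negative numbers also stay short.
                    uint encoded = (uint)((value << 1) ^ (value >> 31));
                    WriteVarUInt(writer, encoded);
                }
            }

            public static int[] ReadInts(BinaryReader reader)
            {
                int length = (int)ReadVarUInt(reader);
                var values = new int[length];
                for (int i = 0; i < length; i++)
                {
                    uint encoded = ReadVarUInt(reader);
                    values[i] = (int)(encoded >> 1) ^ -(int)(encoded & 1);   // undo ZigZag
                }
                return values;
            }

            private static void WriteVarUInt(BinaryWriter writer, uint value)
            {
                while (value >= 0x80)
                {
                    writer.Write((byte)(value | 0x80));
                    value >>= 7;
                }
                writer.Write((byte)value);
            }

            private static uint ReadVarUInt(BinaryReader reader)
            {
                uint result = 0;
                int shift = 0;
                byte b;
                do
                {
                    b = reader.ReadByte();
                    result |= (uint)(b & 0x7F) << shift;
                    shift += 7;
                } while ((b & 0x80) != 0);
                return result;
            }
        }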

    Read the article

  • IIS, Web services, Time out error

    - by Eduard
    Hello, we've got a problem with an ASP.NET web application that uses the web services of another system. I'll describe our system architecture: we have a web application and a Windows service that use the same web services.
    - The Windows service runs all the time and sends information to these web services once an hour.
    - The web application is designed for users to send the same information manually.
    The problem is that when a user tries to send information manually from the web application, .NET sometimes throws the exception "The operation has timed out" (web?). At that time the Windows service successfully sends all necessary information to these web services. The IT staff that supports these web services asserts that there was no request at all from our web application at that time. Then we restarted IIS (iisreset) and everything started to work fine. This situation repeats all the time. There is no anti-virus or firewall on the server. My suspicion is that there is something wrong with IIS - patches, configuration or whatever. The only specific thing is that there are requests that can last 2 minutes (web service response wait time). We tried to reproduce this situation on our local test servers, but everything works fine. OS: Windows Server 2003 R2. .NET: 3.5.

    Read the article

  • SQL Server 2005 Reporting Services and the Report Viewer

    - by Kendra
    I am having an issue embedding my report into an aspx page. Here's my setup: 1 server running SQL Server 2005 and SQL Server 2005 Reporting Services, 1 workstation running XP and VS 2005. The server is not on a domain. Reporting Services is a default installation. I have one report called TestMe in a folder called TestReports using a shared datasource. If I view the report in Report Manager, it renders fine. If I view the report using the http://myserver/reportserver URL it renders fine. If I view the report using http://myserver/reportserver?/TestReports/TestMe it renders fine. If I try to view the report using http://myserver/reportserver/TestReports/TestMe, it just goes to the folder navigation page of the home directory. My web application is impersonating somebody specific to get around the server not being on a domain. When I call the report from the report viewer using http://myserver/reportserver as the server and /TestReports/TestMe as the path I get this error: For security reasons DTD is prohibited in this XML document. To enable DTD processing set the ProhibitDtd property on XmlReaderSettings to false and pass the settings into XmlReader.Create method. When I change the server to http://myserver/reportserver? I get this error when I run the report: Client found response content type of '', but expected 'text/xml'. The request failed with an empty response. I have been searching for a while and haven't found anything that fixes my issue. Please let me know if there is more information needed. Thanks in advance, Kendra
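
    A hedged code-behind sketch, not from the post, of the URL/path split the ReportViewer control expects in remote mode (server root without the trailing "?", report path as a separate property). The control ID, credential values and the minimal IReportServerCredentials implementation are assumptions for illustration.

        using System;
        using System.Net;
        using System.Security.Principal;
        using Microsoft.Reporting.WebForms;

        // Minimal credentials wrapper for a report server that is not on a domain.
        public sealed class FixedReportServerCredentials : IReportServerCredentials
        {
            public WindowsIdentity ImpersonationUser
            {
                get { return null; }   // no impersonation; explicit network credentials instead
            }

            public ICredentials NetworkCredentials
            {
                get { return new NetworkCredential("reportUser", "password", "myserver"); }   // placeholder values
            }

            public bool GetFormsCredentials(out Cookie authCookie, out string userName, out string password, out string authority)
            {
                authCookie = null; userName = null; password = null; authority = null;
                return false;   // not using forms authentication
            }
        }

        // In the page code-behind (rvReport is the ReportViewer control, an assumed ID):
        protected void Page_Load(object sender, EventArgs e)
        {
            rvReport.ProcessingMode = ProcessingMode.Remote;
            rvReport.ServerReport.ReportServerUrl = new Uri("http://myserver/reportserver");   // server root, no '?'
            rvReport.ServerReport.ReportPath = "/TestReports/TestMe";                          // folder + report name
            rvReport.ServerReport.ReportServerCredentials = new FixedReportServerCredentials();
        }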

    Read the article

  • UpdatePanel with GridView with LinkButton with Image Causes Full Postback

    - by Chris
    So this might be a fairly specific issue, but I figured I'd post it since I spent hours struggling with it before I was able to determine the cause.

        <asp:GridView ID="gvAttachments" DataKeyNames="UploadedID" AutoGenerateColumns="false"
            OnSelectedIndexChanged="gvAttachments_SelectedIndexChanged" runat="server">
            <EmptyDataTemplate>There are no attachments associated to this email template.</EmptyDataTemplate>
            <Columns>
                <asp:TemplateField ItemStyle-Width="100%">
                    <ItemTemplate>
                        <asp:LinkButton CommandName="Select" runat="server"><img src="/images/icons/trashcan.png" style="border: none;" /></asp:LinkButton>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    In the ItemTemplate of the TemplateField of the GridView I have a LinkButton with an image inside of it. Normally I do this when I have an image with some text next to it, but this time, for whatever reason, I just have the image. This causes the UpdatePanel to always do a full postback. SOLUTION: Change that LinkButton to be an ImageButton and the problem is solved.

        <asp:ImageButton ImageUrl="/images/icons/trashcan.png" Style="border: none;" CommandName="Select" runat="server" />

    Read the article

  • How to force ADO.Net to use only the System.String DataType in the reader's TableSchema.

    - by Keith Sirmons
    Howdy, I am using an OleDbConnection to query an Excel 2007 spreadsheet. I want to force the OleDbDataReader to use only string as the column datatype. The system is looking at the first 8 rows of data and inferring the data type to be Double. The problem is that on row 9 I have a string in that column, and the OleDbDataReader is returning a Null value since it could not be cast to a Double. I have used these connection strings:

        Provider=Microsoft.ACE.OLEDB.12.0;Data Source="ExcelFile.xlsx";Persist Security Info=False;Extended Properties="Excel 12.0;IMEX=1;HDR=No"
        Provider=Microsoft.Jet.OLEDB.4.0;Data Source="ExcelFile.xlsx";Persist Security Info=False;Extended Properties="Excel 8.0;HDR=No;IMEX=1"

    Looking at reader.GetSchemaTable().Rows[7].ItemArray[5], its DataType is Double. Row 7 in this schema correlates with the specific column in Excel I am having issues with; ItemArray[5] is its DataType column. Is it possible to create a custom TableSchema for the reader so that when accessing the Excel files, I can treat all cells as text instead of letting the system attempt to infer the datatype? I found some good info at this page: Tips for reading Excel spreadsheets using ADO.NET

        The main quirk about the ADO.NET interface is how datatypes are handled. (You'll notice I've been carefully avoiding the question of which datatypes are returned when reading the spreadsheet.) Are you ready for this? ADO.NET scans the first 8 rows of data, and based on that guesses the datatype for each column. Then it attempts to coerce all data from that column to that datatype, returning NULL whenever the coercion fails!

    Thank you, Keith

    Here is a reduced version of my code:

        using (OleDbConnection connection = new OleDbConnection(BuildConnectionString(dataMapper).ToString()))
        {
            connection.Open();
            using (OleDbCommand cmd = new OleDbCommand())
            {
                cmd.Connection = connection;
                cmd.CommandText = "SELECT * from [Sheet1$]";
                using (OleDbDataReader reader = cmd.ExecuteReader())
                {
                    using (DataTable dataTable = new DataTable("TestTable"))
                    {
                        dataTable.Load(reader);
                        base.SourceDataSet.Tables.Add(dataTable);
                    }
                }
            }
        }
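
    A hedged sketch, not from the post: if the provider keeps guessing types, one workaround is to ignore the inferred schema and copy every cell into an all-string DataTable yourself instead of calling DataTable.Load.

        using System;
        using System.Data;
        using System.Data.OleDb;

        // Reads every column of the result set as text, regardless of what the
        // provider inferred, by building the DataTable manually.
        public static DataTable LoadAllAsText(OleDbDataReader reader, string tableName)
        {
            var table = new DataTable(tableName);
            for (int i = 0; i < reader.FieldCount; i++)
            {
                table.Columns.Add(reader.GetName(i), typeof(string));
            }

            while (reader.Read())
            {
                DataRow row = table.NewRow();
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    // Convert whatever the provider produced (Double, DateTime, DBNull...) to text.
                    row[i] = reader.IsDBNull(i) ? null : Convert.ToString(reader.GetValue(i));
                }
                table.Rows.Add(row);
            }
            return table;
        }

    Note that this only normalizes types after the fact: cells the provider already coerced to NULL will still be NULL, so IMEX=1 (and, if necessary, the Jet/ACE TypeGuessRows and ImportMixedTypes registry settings) is still what keeps mixed columns arriving as text in the first place.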

    Read the article

  • Correctly use dependency injection

    - by Rune
    Me and two other colleagues are trying to understand how to best design a program. For example, I have an interface ISoda and multiple classes that implement that interface like Coke, Pepsi, DrPepper, etc.... My colleague is saying that it's best to put these items into a database like a key/value pair. For example: Key | Name -------------------------------------- Coke | my.namespace.Coke, MyAssembly Pepsi | my.namespace.Pepsi, MyAssembly DrPepper | my.namespace.DrPepper, MyAssembly ... then have XML configuration files that map the input to the correct key, query the database for the key, then create the object. I don't have any specific reasons, but I just feel that this is a bad design, but I don't know what to say or how to correctly argue against it. My second colleague is suggesting that we micro-manage each of these classes. So basically the input would go through a switch statement, something similiar to this: ISoda soda; switch (input) { case "Coke": soda = new Coke(); break; case "Pepsi": soda = new Pepsi(); break; case "DrPepper": soda = new DrPepper(); break; } This seems a little better to me, but I still think there is a better way to do it. I've been reading up on IoC containers the last few days and it seems like a good solution. However, I'm still very new to dependency injection and IoC containers, so I don't know how to correctly argue for it. Or maybe I'm the wrong one and there's a better way to do it? If so, can someone suggest a better method? What kind of arguments can I bring to the table to convince my colleagues to try another method? What are the pros/cons? Why should we do it one way? Unfortunately, my colleagues are very resistant to change so I'm trying to figure out how I can convince them.

    Read the article

  • CSS compilers and converting IE hacks to conditional css

    - by xckpd7
    Skip to the bottom for the question, but first, a little context. So I have been looking into CSS compilers (like Sass & Less) for a while, and have been really interested in them, not because they help me understand anything easier (I've been doing CSS for a couple of years now) but rather because they cut down on cruft and help me see things more easily. I recently have been looking into reliably implementing inline-block (and clearfix), which require lots of extraneous code & hacks. Now according to all the authorities in the field, I shouldn't put IE hacks in the same page I do my CSS in; I should make them conditional. But for me it is a really big hassle to go through and manage all this additional code, which is why I really like things like Less. Instead of applying unsemantic classes, you specify a mixin and apply it once, and you're all set. So I guess I got a little off the track (I wanted to explain my points), but basically, I'm at the point where these CSS compilers are very useful for me, and they allow me to abstract a lot of the cruft away, and reliably apply it once and then just compile it. I would like to have a way to reliably compile IE-specific styles into their own conditional files (a la Less / Sass) so I don't have to deal with managing two files for no reason. Does anything like a script/application exist that runs and can make underscore / star hacks a part of their own file?

    Read the article

< Previous Page | 551 552 553 554 555 556 557 558 559 560 561 562  | Next Page >