Search Results

Search found 2891 results on 116 pages for 'fs tech'.


  • multi-core processing in R on windows XP - via doMC and foreach

    - by Jan
    Hi guys, I'm posting this question to ask for advice on how to optimize the use of multiple processors from R on a Windows XP machine. At the moment I'm creating 4 scripts (each script with e.g. for (i in 1:100), for (i in 101:200), etc.) which I run in 4 different R sessions at the same time. This seems to use all the available CPU. I would, however, like to do this a bit more efficiently. One solution could be to use the "doMC" and "foreach" packages, but this is not possible in R on a Windows machine. For example:

    library("foreach")
    library("strucchange")
    library("doMC")  # would this be possible on a windows machine?
    registerDoMC(2)  # for a computer with two cores (processors)

    ## Nile data with one breakpoint: the annual flows drop in 1898
    ## because the first Ashwan dam was built
    data("Nile")
    plot(Nile)

    ## F statistics indicate one breakpoint
    fs.nile <- Fstats(Nile ~ 1)
    plot(fs.nile)
    breakpoints(fs.nile)  # , hpc = "foreach" --> It would be great to test this.
    lines(breakpoints(fs.nile))

    Any solutions or advice? Thanks, Jan

    Read the article

  • Loading .dll/.exe from file into temporary AppDomain throws Exception

    Hi Gang, I am trying to make a Visual Studio AddIn that removes unused references from the projects in the current solution (I know this can be done, Resharper does it, but my client doesn't want to pay for 300 licences). Anyhoo, I use DTE to loop through the projects, compile their assemblies, then reflect over those assemblies to get their referenced assemblies and cross-examine the .csproj file. Problem: since the .dll/.exe I loaded up with Reflection doesn't unload until the app domain unloads, it is now locked and the projects can't be built again, because VS tries to re-create the files (all standard stuff). I have tried creating temporary files, then reflecting over them... no luck, the original files are still locked (I totally don't understand that, BTW). Now I am going down the path of creating a temporary AppDomain to load the files into and then destroy. I am having problems loading the files, though. The way I understand AppDomain.Load is that I should create and send a byte array of the assembly to it. I do that:

    FileStream fs = new FileStream(assemblyFile, FileMode.Open);
    byte[] assemblyFileBuffer = new byte[(int)fs.Length];
    fs.Read(assemblyFileBuffer, 0, assemblyFileBuffer.Length);
    fs.Close();

    AppDomainSetup domainSetup = new AppDomainSetup();
    domainSetup.ApplicationBase = assemblyFileInfo.Directory.FullName;
    AppDomain tempAppDomain = AppDomain.CreateDomain("TempAppDomain", null, domainSetup);
    Assembly projectAssembly = tempAppDomain.Load(assemblyFileBuffer);

    The last line throws an exception: "Could not load file or assembly 'WindowsFormsApplication1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified." (assembly: "WindowsFormsApplication3, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"). Any help or thoughts would be greatly appreciated. My head is lopsided from beating it against the wall... Thanks, Dan
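
    A sketch of one way around the lock, assuming .NET Framework (AppDomains are not available on .NET Core/5+): the reflection work has to run inside the temporary domain via a MarshalByRefObject proxy, so the default domain never loads the assembly and everything is released when the temporary domain is unloaded. The AssemblyInspector/TempDomainLoader names below are illustrative, not part of the original add-in.

    using System;
    using System.IO;
    using System.Reflection;

    public class AssemblyInspector : MarshalByRefObject
    {
        // Runs inside the temporary AppDomain; returns only serializable data (strings).
        public string[] GetReferencedAssemblyNames(string assemblyPath)
        {
            byte[] raw = File.ReadAllBytes(assemblyPath);   // load from bytes so the file itself is never locked
            Assembly asm = Assembly.Load(raw);
            return Array.ConvertAll(asm.GetReferencedAssemblies(), r => r.FullName);
        }
    }

    public static class TempDomainLoader
    {
        public static string[] InspectReferences(string assemblyPath)
        {
            AppDomainSetup setup = new AppDomainSetup();
            setup.ApplicationBase = Path.GetDirectoryName(assemblyPath);
            AppDomain temp = AppDomain.CreateDomain("TempAppDomain", null, setup);
            try
            {
                AssemblyInspector inspector = (AssemblyInspector)temp.CreateInstanceAndUnwrap(
                    typeof(AssemblyInspector).Assembly.FullName,
                    typeof(AssemblyInspector).FullName);
                return inspector.GetReferencedAssemblyNames(assemblyPath);
            }
            finally
            {
                AppDomain.Unload(temp);   // everything loaded into the temporary domain is released here
            }
        }
    }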

    Read the article

  • Stop CDATA tags from being output-escaped when writing to XML in C#

    - by Smallgods
    We're creating a system outputting some data to an XML schema. Some of the fields in this schema need their formatting preserved, as the end system will parse them into a potential Word doc layout. To do this we're using <![CDATA[Some formatted text]]> tags inside the App.Config file, then putting that into the appropriate property field of an xsd.exe-generated class from our schema. Ideally the formatting wouldn't be our problem, but unfortunately that's just how the system is going. The App.Config section looks as follows:

    <header>
      <![CDATA[Some sample formatted data]]>
    </header>

    The data assignment looks as follows:

    HeaderSection header = ConfigurationManager.GetSection("header") as HeaderSection;
    report.header = "<[CDATA[" + header.Header + "]]>";

    Finally, the XML output is handled as follows:

    xs = new XmlSerializer(typeof(report));
    fs = new FileStream(reportLocation, FileMode.Create);
    xs.Serialize(fs, report);
    fs.Flush();
    fs.Close();

    This should in theory produce, in the final XML, a section with CDATA tags around its information. However, the angle brackets are being converted into &lt; and &gt;. I've looked at ways of disabling output escaping, but so far can only find references to XSLT sheets. I've also tried @"<[CDATA[" with the strings, but again no luck. Any help would be appreciated!
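
    A sketch of the usual workaround, with illustrative names rather than the actual xsd.exe-generated class: XmlSerializer escapes anything assigned to a string property, but it will emit a genuine CDATA section if the serialized member is of type XmlCDataSection, so no manual "<![CDATA[" concatenation is needed.

    using System.Xml;
    using System.Xml.Serialization;

    public class Report
    {
        // The plain string stays usable in code but is not serialized directly.
        [XmlIgnore]
        public string Header { get; set; }

        // Serialized in place of the string; written as <header><![CDATA[...]]></header>.
        [XmlElement("header")]
        public XmlCDataSection HeaderCData
        {
            get { return new XmlDocument().CreateCDataSection(Header); }
            set { Header = (value == null) ? null : value.Value; }
        }
    }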

    Read the article

  • How do I set the LRECL in C#.NET?

    - by donde
    I have been trying to FTP a dtl file from .NET to what I believe is an AS400. The error being reported back to me is: "One or more lines have been truncated", and the admin is saying the file is coming over with 256 lines that have variable length columns. I found this explanation online: "we have to establish defaults because no specifics about the file exist. The default RECFM is V and LRECL is 256. This means that SAS will scan the input record looking for the CR & LF to tell us that we are at the EOR. If the marker isn't found within the limit of the LRECL, SAS discards the data from the LRECL value to the end of the record and adds a message to the Log that 'One or more lines have been truncated'." So I need to set the LRECL. How do I do this in C#.NET?

    FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(ftpfullpath);
    ftp.Credentials = new NetworkCredential(user, pwd);
    ftp.KeepAlive = false;
    ftp.UseBinary = false;
    ftp.Method = WebRequestMethods.Ftp.UploadFile;

    FileStream fs = File.OpenRead(inputfilepath + ftpfileName);
    byte[] buffer = new byte[fs.Length];
    fs.Read(buffer, 0, buffer.Length);
    fs.Close();

    Stream ftpstream = ftp.GetRequestStream();
    int i = 0;
    int intBlock = 1786;
    int intBuffLeft = buffer.Length;
    while (i < buffer.Length)
    {
        if (intBuffLeft >= 1786)
        {
            ftpstream.Write(buffer, i, intBlock);
        }
        else
        {
            ftpstream.Write(buffer, i, intBuffLeft);
        }
        i += intBlock;
        intBuffLeft -= 1786;
    }
    ftpstream.Close();
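
    For what it's worth, LRECL and RECFM are attributes of the target dataset on the host, not something FtpWebRequest can set. A client-side sketch of what can help (an assumption that the records themselves fit the host's record length) is to upload in ASCII mode and terminate every record with an explicit CRLF instead of streaming raw bytes:

    using System;
    using System.IO;
    using System.Net;

    static void UploadAsRecords(string localPath, string ftpUri, string user, string pwd, int maxRecordLength)
    {
        FtpWebRequest ftp = (FtpWebRequest)WebRequest.Create(ftpUri);
        ftp.Credentials = new NetworkCredential(user, pwd);
        ftp.Method = WebRequestMethods.Ftp.UploadFile;
        ftp.UseBinary = false;                       // ASCII mode: the FTP layer handles end-of-line translation

        using (StreamWriter writer = new StreamWriter(ftp.GetRequestStream()))
        {
            writer.NewLine = "\r\n";                 // make sure every record ends with CR+LF
            foreach (string line in File.ReadLines(localPath))
            {
                if (line.Length > maxRecordLength)   // records longer than the host's LRECL would still be truncated
                    throw new InvalidOperationException("Record longer than " + maxRecordLength + " characters.");
                writer.WriteLine(line);
            }
        }
        using (FtpWebResponse response = (FtpWebResponse)ftp.GetResponse())
            Console.WriteLine(response.StatusDescription);
    }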

    Read the article

  • C# comparing two files regex problem.

    - by Mike
    Hi everyone, what I'm trying to do is open a huge list of files (about 40k records) and match them against the lines of a file that contains 2 million records, and if my line from file A matches a line in file B, write out that line. File A contains a bunch of file names without extensions and file B contains full file paths including extensions. I'm using this but I can't get it to go:

    string alphaFilePath = (@"C:\Documents and Settings\g\Desktop\Arrp\Find\natst_ready.txt");
    List<string> alphaFileContent = new List<string>();
    using (FileStream fs = new FileStream(alphaFilePath, FileMode.Open))
    using (StreamReader rdr = new StreamReader(fs))
    {
        while (!rdr.EndOfStream)
        {
            alphaFileContent.Add(rdr.ReadLine());
        }
    }

    string betaFilePath = @"C:\Documents and Settings\g\Desktop\Arryup\Find\eble.txt";
    StringBuilder sb = new StringBuilder();
    using (FileStream fs = new FileStream(betaFilePath, FileMode.Open))
    using (StreamReader rdr = new StreamReader(fs))
    {
        while (!rdr.EndOfStream)
        {
            string betaFileLine = rdr.ReadLine();
            string matchup = Regex.Match(alphaFileContent, @"(\\)(\\)(\\)(\\)(\\)(\\)(\\)(\\)(.*)(\.)").Groups[9].Value;
            if (alphaFileContent.Equals(matchup))
            {
                File.AppendAllText(@"C:\array_tech.txt", betaFileLine);
            }
        }
    }

    This doesn't work because the alphaFileContent is a single line only, and I'm having a hard time figuring out how to get my regex to work on the file that contains all the file paths (betaFilePath). Here is a sample line from the beta file: C:\arres_i\Grn\Ora\SEC\DBZ_EX1\Nes\001\DZO-EX00001.txt. Here is the line I'm trying to compare from my alpha file: DZO-EX00001
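
    A sketch of an alternative that avoids the regex entirely (paths and output file reused from the question): loading the 40k names into a HashSet makes each of the 2 million lookups O(1), and Path.GetFileNameWithoutExtension extracts the comparable name from each full path.

    using System;
    using System.Collections.Generic;
    using System.IO;

    class MatchFileNames
    {
        static void Main()
        {
            string alphaFilePath = @"C:\Documents and Settings\g\Desktop\Arrp\Find\natst_ready.txt";
            string betaFilePath = @"C:\Documents and Settings\g\Desktop\Arryup\Find\eble.txt";

            // names without extensions, e.g. "DZO-EX00001"
            var wanted = new HashSet<string>(File.ReadAllLines(alphaFilePath), StringComparer.OrdinalIgnoreCase);

            using (StreamWriter writer = new StreamWriter(@"C:\array_tech.txt", true))
            {
                foreach (string betaFileLine in File.ReadLines(betaFilePath))
                {
                    // "C:\arres_i\...\DZO-EX00001.txt" -> "DZO-EX00001"
                    string name = Path.GetFileNameWithoutExtension(betaFileLine);
                    if (wanted.Contains(name))
                        writer.WriteLine(betaFileLine);  // full path from the beta file
                }
            }
        }
    }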

    Read the article

  • Exited event of Process is not raised?

    - by Kanags.Net
    In my application, I am opening an Excel sheet to show one of my Excel documents to the user. But before showing the Excel file I save it to a folder on my local machine, which in fact is the copy used for showing. When the user closes the application I wish to close the opened Excel files and delete all the Excel files present in my local folder. For this, in the logout event I have written code to close all the opened files, like shown below:

    Process[] processes = Process.GetProcessesByName(fileType);
    foreach (Process p in processes)
    {
        IntPtr pFoundWindow = p.MainWindowHandle;
        if (p.MainWindowTitle.Contains(documentName))
        {
            p.CloseMainWindow();
            p.Exited += new EventHandler(p_Exited);
        }
    }

    And in the process Exited event I wish to delete the Excel file whose process has exited, like shown below:

    void p_Exited(object sender, EventArgs e)
    {
        string file = strOriginalPath;
        if (File.Exists(file))
        {
            // Pdf issue fix
            FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
            fs.Flush();
            fs.Close();
            fs.Dispose();
            File.Delete(file);
        }
    }

    But the problem is that this Exited event is not called at all. On the other hand, if I delete the file right after closing the MainWindow of the process I get an exception: "File already used by another process". Could anyone help me on how to achieve my objective, or give me a reason why the process Exited event is not being called?
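
    A sketch of the likely missing piece: Process only raises Exited when EnableRaisingEvents is set to true, and the handler should be wired up before the window is asked to close. (The variables mirror the question's own; the deletion is reduced to File.Delete for brevity.)

    using System.Diagnostics;
    using System.IO;

    static void CloseAndDeleteWhenExited(string processName, string documentName, string filePath)
    {
        foreach (Process p in Process.GetProcessesByName(processName))
        {
            if (!p.MainWindowTitle.Contains(documentName))
                continue;

            p.EnableRaisingEvents = true;        // without this, Exited is never raised
            p.Exited += (sender, e) =>
            {
                if (File.Exists(filePath))
                    File.Delete(filePath);       // the lock is gone once the process has exited
            };
            p.CloseMainWindow();                 // ask Excel to close after the handler is attached
        }
    }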

    Read the article

  • Get Youtube Video ID from html code with PHP

    - by asumaran
    I want to get all the YouTube video IDs (and only those) from HTML code. Look at the object/embed code for the YouTube videos, which may appear multiple times:

    // html from database
    <p>loremm ipsum dolor sit amet enot <a href="link" attribute=""blah blah blah">anchor link</a> </p>
    <object width="425" height="344">
      <param name="movie" value="http://www.youtube.com/v/Ou5eVl5eqtg&hl=es_ES&fs=1&"></param>
      <param name="allowFullScreen" value="true"></param>
      <param name="allowscriptaccess" value="always"></param>
      <embed src="http://www.youtube.com/v/Ou5eVl5eqtg&hl=es_ES&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344">
      </embed>
    </object>
    <image src="path/to/image.ext" >
    <p>lorem ipsum dolor sit amet... blah</p>
    <p>lorem ipsum dolor sit amet... blah</p>
    <object width="425" height="344">
      <param name="movie" value="http://www.youtube.com/v/Ou5eVl5eqtg&hl=es_ES&fs=1&"></param>
      <param name="allowFullScreen" value="true"></param>
      <param name="allowscriptaccess" value="always"></param>
      <embed src="http://www.youtube.com/v/Ou5eVl5eqtg&hl=es_ES&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344">
      </embed>
    </object>
    <p>blah</p>
    blah<br/>
    blah<br/>
    blah<br/>

    Read the article

  • (closed) nodejs responding with an image (via pipe and response.end()) leads to strange behaviour

    - by johannesboyne
    What the problem was: I had 2 different write streams! If WriteStream #1 is closed, the second should be closed too and then it all should be piped... BUT node is asynchronous, so while one had been closed, the other one hadn't. Even though stream.end() was called... well, you should always wait for the close event! Thanks guys for your help! I am at my wit's end. I used this code to pipe an image to my clients:

    req.pipe(fs.createReadStream(__dirname+'/imgen/cached_images/' + link).pipe(res))

    It does work, but sometimes the image is not transferred completely. No error is thrown, though, neither on the client side (browser) nor on the server side (node.js). My second try was:

    var img = fs.readFileSync(__dirname+'/imgen/cached_images/' + link);
    res.writeHead(200, { 'Content-Type' : 'image/png' });
    res.end(img, 'binary');

    but it leads to the same strange behaviour... Does anyone have a clue for me? Regards! (abstracted code...)

    var http = require('http');
    http.createServer(function (req, res) {
      Imgen.generateNew(
        'virtualtwins/www_leonardocampus_de/overview/28',
        'www.leonardocampus.de',
        'overview',
        '28',
        null,
        [],
        [],
        function (link) {
          fs.stat(__dirname + '/imgen/cached_images/' + link, function (err, file_info) {
            if (err) { console.log('err', err); }
            console.log('file info', file_info.size);
            res.writeHead(200, 'image/png');
            fs.createReadStream(__dirname + '/imgen/cached_images/' + link).pipe(res);
          });
        }
      );
    }).listen(13337, '127.0.0.1');

    Imgen.generateNew just creates a new file, saves it to disk and gives back the path (link).

    Read the article

  • Must-see sessions at TCUK11

    - by Roger Hart
    Technical Communication UK is probably the best professional conference I've been to. Last year, I spoke there on content strategy, and this year I'll be co-hosting a workshop on embedded user assistance. Obviously, I'd love people to come along to that; but there are some other sessions I'd like to flag up for anybody thinking of attending.

    Tuesday 20th Sept - workshops

    This will be my first year at the pre-conference workshop day, and I'm massively glad that our workshop hasn't been scheduled alongside the one I'm really interested in. My picks:

    It looks like you're embedding user assistance. Would you like help? - My colleague Dom and I are presenting this one. It's our paean to Clippy, to the brilliant idea he represented, and the crashing failure he was. Less precociously, we'll be teaching embedded user assistance, Red Gate style.

    Statistics without maths: acquiring, visualising and interpreting your data - This doesn't need to do anything apart from what it says on the tin in order to be gold dust. But given the speakers, I suspect it will. A data-informed approach is a great asset to technical communications, so I'd recommend this session to anybody even faintly interested. The speakers here have a great track record of giving practical, accessible introductions to big topics. Go along.

    Wednesday 21st Sept - day one

    There's no real need to recommend the keynote for a conference, but I will just point out that this year it's Google's Patrick Hofmann. That's cool. You know what else is cool:

    Focus on the user, the rest follows - An intro to modelling customer experience. This is a really exciting area for tech comms, and potentially touches on one of my personal hobby-horses: the convergence of technical communication and marketing. It's all part of delivering customer experience, and knowing what your users need lets you help them, sell to them, and delight them.

    Content strategy year 1: a tale from the trenches - It's often been observed that content strategy is great at banging its own drum, but not so hot on compelling case studies. Here you go, folks. This is the presentation I'm most excited about so far.

    On a mission to communicate! - Skype help their users communicate, but how do they communicate with them? I guess we'll find out.

    Then there's the stuff that I'm not too excited by, but you might just be. The standards geeks and agile freaks can get together in a presentation on the forthcoming ISO standards for agile authoring. Plus, there's a session on VBA for tech comms. I do have one gripe about day 1. The other big UK tech comms conference, UA Europe, have - I think - netted the more interesting presentation from Ellis Pratt. While I have no doubt that his TCUK case study on producing risk assessments will be useful, I'd far rather go to his talk on game theory for tech comms. Hopefully UA Europe will record it.

    Thursday 22nd Sept - day two

    Day two has a couple of slots yet to be confirmed. The rumour is that one of them will be the brilliant "Questions and rants" session from last year. I hope so. It's not ranting, but I'll be going to:

    RTFMobile: beyond stating the obvious - Ultan O'Broin is an engaging speaker with a lot to say, and mobile is one of the most interesting and challenging new areas for tech comms. Even if this weren't a research-based presentation from a company with buckets of technology experience, I'd be going. It is, and you should too.

    Pattern recognition for technical communicators - One of the best things about TCUK is the tendency to include sessions that tackle the theoretical and bring them towards the practical. Kai and Chris delivered cracking and well-received talks last year, and I'm looking forward to seeing what they've got for us on some of the conceptual underpinning of technical communication.

    Developing an interactive non-text learning programme - Annoyingly, this clashes with Pattern Recognition, so I hope at least one of the streams is recorded again this year. The idea of communicating complex information without words is fascinating, and this sounds like a great example of this year's third stream: "anything but text".

    For the localization and DITA crowds, there's rich pickings on day two, though I'm not sure how many of those sessions I'm interested in. In the 13:00 - 13:40 slot, there's an interesting clash between Linda Urban on re-use and training content, and a piece on minimalism I'm sorely tempted by.

    That's my pick of #TCUK11. I'll be doing a round-up blog after the event, and probably talking a bit more about it beforehand. I'm also reliably assured that there are still plenty of tickets.

    Read the article

  • Visual Studio project using the Quality Center API remains in memory

    - by Traveling Tech Guy
    Hi, I'm currently developing a connector DLL to HP's Quality Center. I'm using their (insert expletive) COM API to connect to the server. An Interop wrapper gets created automatically by VStudio. My solution has 2 projects: the DLL and a tester application - essentially a form with buttons that call functions in the DLL. Everything works well - I can create defects, update them and delete them. When I close the main form, the application stops nicely. But when I call a function that returns a list of all available projects (to fill a combo box), if I close the main form, VStudio still shows the solution as running and I have to stop it. I've managed to pinpoint a single function in my code: when I call it, the solution remains "hung", and if I don't, it closes well. It's a call to a property in the TDC object, get_VisibleProjects, that returns a List (not the .NET one, but a type in the COM library) - I just iterate over it and return a proper list (that I later use to fill the combo box):

    public List<string> GetAvailableProjects()
    {
        List<string> projects = new List<string>();
        foreach (string project in this.tdc.get_VisibleProjects(qcDomain))
        {
            projects.Add(project);
        }
        return projects;
    }

    My assumption is that something gets retained in memory. If I run the EXE outside of VStudio it closes - but who knows what gets left behind in memory? My question is: how do I get rid of whatever calling this property returns? Shouldn't the GC handle this? Do I need to delve into pointers? Things I've tried: getting the list into a variable and setting it to null at the end of the function; adding a destructor to the class and nulling the tdc object; stepping through the tester application all the way out, when the form closes and the Main function ends - it closes, but VStudio still shows I'm running. Thanks for your assistance!
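
    A sketch of one thing worth trying (an assumption about the cause, not a confirmed fix): the COM collection returned by get_VisibleProjects is held by a runtime-callable wrapper, and releasing that wrapper explicitly lets the COM server shut down when the form closes instead of keeping the process alive.

    using System.Collections.Generic;
    using System.Runtime.InteropServices;

    public List<string> GetAvailableProjects()
    {
        List<string> projects = new List<string>();
        var visibleProjects = this.tdc.get_VisibleProjects(qcDomain);  // COM object from the interop wrapper
        try
        {
            foreach (string project in visibleProjects)
            {
                projects.Add(project);
            }
        }
        finally
        {
            Marshal.ReleaseComObject(visibleProjects);  // drop the RCW so the COM object can be released
        }
        return projects;
    }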

    Read the article

  • unexpected output

    - by tech-ref
    Hi, I wrote a function which works as expected, but I don't understand why the output is printed like that.

    Function:

    datatype prop = Atom of string | Not of prop | And of prop*prop | Or of prop*prop;

    (* XOR = (A And Not B) OR (Not A Or B) *)
    local
      fun do_xor (alpha, beta) = Or( And( alpha, Not(beta) ), Or( Not(alpha), beta ) )
    in
      fun xor (alpha, beta) = do_xor(alpha, beta);
    end;

    Test:

    val result = xor(Atom "a", Atom "b");

    Output:

    val result = Or (And (Atom #,Not #),Or (Not #,Atom #)) : prop

    Thanks again (especially zeuxcg)

    Read the article

  • Django tests failing on invalid keyword argument

    - by Darwin Tech
    I have a models.py like so:

    from django.db import models
    from django.contrib.auth.models import User
    from datetime import datetime

    class UserProfile(models.Model):
        user = models.OneToOneField(User)

        def __unicode__(self):
            return self.user.username

    class Project(models.Model):
        user = models.ForeignKey(UserProfile)
        created = models.DateTimeField(auto_now_add=True)
        updated = models.DateTimeField(auto_now=True)
        product = models.ForeignKey('tool.product')
        module = models.ForeignKey('tool.module')
        model = models.ForeignKey('tool.model')
        zipcode = models.IntegerField(max_length=5)

        def __unicode__(self):
            return unicode(self.id)

    And my tests.py:

    from django.test import TestCase, Client

    # --- import app models
    from django.contrib.auth.models import User
    from tool.models import Module, Model, Product
    from user_profile.models import Project, UserProfile

    # --- unit tests --- #
    class UserProjectTests(TestCase):
        fixtures = ['admin_user.json']

        def setUp(self):
            self.product1 = Product.objects.create(
                name='bar',
            )
            self.module1 = Module.objects.create(
                name='foo',
                enable=True
            )
            self.model1 = Model.objects.create(
                module=self.module1,
                name='baz',
                enable=True
            )
            self.user1 = User.objects.get(pk=1)
            ...

        def test_can_create_project(self):
            self.project1 = Model.objects.create(
                user=self.user1,
                product=self.product1,
                module=self.module1,
                model=self.model1,
                zipcode=90210
            )
            self.assertEquals(self.project1.zipcode, 90210)

    But I get a TypeError: 'product' is an invalid keyword argument for this function. I'm not sure what is failing, but I'm guessing something to do with the FK relationships... Any help would be much appreciated.

    Read the article

  • How to handle SalesForce WSDL files for sandbox and production sites in ASP.Net?

    - by Traveling Tech Guy
    I need to authenticate users and get info about them from an ASP.Net application. Since I have 2 sites (sandbox, production) and 2 org IDs, I needed to generate 2 SalesForce WSDL files. I diffed the 2 files (each about 600 KB in size) and while they are 95% the same, there are enough differences strewn all over the place for me to need to use them both. I added both as web references to my solution, and here's where my problem starts. Obviously, I cannot use both references in the same file, as they contain the same classes/functions. I had to write a quick-and-dirty solution over the weekend, so I just created 2 classes - each using a different web reference but otherwise with the exact same functionality - and I use the appropriate one based on the URL the user is coming from. This works well, but strikes me as a bad (read: quick-and-dirty) solution. My question: is there any way to do one or more of the following: change the web reference on the fly? use both web references in the same file, but put one in a different namespace? find a better solution to the whole situation? I also end up with a huge XmlSerializer.dll (3 MB!) - probably due to using both huge WSDL files. Thanks for your time.
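
    A sketch of the "separate namespaces" option, with every name here illustrative (it assumes the two web references were added as SfProduction and SfSandbox, each generating its own SforceService proxy): since each web reference already lives in its own namespace, a small interface can hide which generated proxy the rest of the code is talking to.

    public interface ISalesForceGateway
    {
        // Real operations (login, queries, etc.) would be declared here and implemented
        // by each gateway using its own generated proxy.
        string EndpointUrl { get; }
    }

    public class ProductionGateway : ISalesForceGateway
    {
        private readonly SfProduction.SforceService service = new SfProduction.SforceService();
        public string EndpointUrl { get { return service.Url; } }
    }

    public class SandboxGateway : ISalesForceGateway
    {
        private readonly SfSandbox.SforceService service = new SfSandbox.SforceService();
        public string EndpointUrl { get { return service.Url; } }
    }

    public static class SalesForceGatewayFactory
    {
        // Choose the proxy based on which site the request came from.
        public static ISalesForceGateway For(bool isSandbox)
        {
            return isSandbox ? (ISalesForceGateway)new SandboxGateway() : new ProductionGateway();
        }
    }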

    Read the article

  • Visual Studio crashes consistently on web-related projects

    - by Traveling Tech Guy
    Hi, I have a brand new VS2010 installed on a Win2008R2 machine. I started getting this error when debugging a WCF service project: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." When I started developing a web site a week later, this became consistent - I can't debug it. The stack dump reads:

    at Microsoft.VisualStudio.WebHost.Host.ProcessRequest(Connection conn)
    at Microsoft.VisualStudio.WebHost.Server.OnSocketAccept(Object acceptedSocket)
    at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
    at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
    at System.Threading.ThreadPoolWorkQueue.Dispatch()
    at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()

    I tried searching online, and some recommend turning off "Suppress JIT Optimizations" in the Debugging options - this does not seem to make a difference. Clearly the problem is with the built-in web server. But am I doing something wrong? Is there something I can do? Or is this a known bug? Thanks for your time, Guy

    Update 12/31: Today I tried using CassiniDev as a replacement for the original VS2010 web server - exact same result. My suspicion is that there's some internal conflict between VS2010, Windows Server 2008R2 and maybe the fact that it's a 64-bit OS. I switched to using IIS as my debug server - and that seems to work, with some annoying side effects. My conclusion: do not use a 64-bit server system as your dev machine. Develop on 32-bit - deploy to 64-bit. Side conclusion: there are some scenarios Microsoft's QA doesn't test.

    Read the article

  • JQuery Facebook like Tag feature

    - by Tech
    Hi, does anyone know of a very basic tag jQuery script (a Facebook-like tag feature)? All I want to do is: when I type @ in a text box, it fires an AJAX search, which then brings back a display name, e.g. "Bob Johnson", and an associated Id, and stores the Id in a comma-delimited list. So an example would be the following: [TextArea] [Name 1] [Name 2] [Name 3]. The thing is, it also needs to remove the entry (Id) once the tag gets removed, and it needs to load into a DIV which has a scrollbar so that the list doesn't get too big. If anyone knows how/if this is possible that would be great. Thanks.

    Read the article

  • How can I add two CSS heights of different divs and make that the height to another div with jquery

    - by NW Tech
    I have three divs: one floated left and the other two floated right (they're stacked). I'm trying to combine the height of the two on the right and apply that height to the one on the left. This is what I have, with no luck:

    $(window).bind("load", function() {
        var slw = $('div.2').height();
        var lw = $('div.3').height();
        var result = slw += lw;
        $('div.1').css({ 'height': result + 'px' });
    });

    TIA

    Read the article

  • compare date split across columns

    - by alex-tech
    Greetings. I am querying tables from Microsoft SQL 2008 which have the date split across 3 columns: day, month and year. Unfortunately, I do not have control over this, because data is coming into the database daily from a 3rd-party source in that format. I need to add BETWEEN to a WHERE clause so the user can pull records within a range. It would be easy enough if the date were in a single column, but I'm finding it nearly impossible when it's split across three columns. To display the date, I am doing

    CAST(CAST(year as varchar(4)) + '-' + CAST(month as varchar(2)) + '-' + CAST(day as varchar(2)) as date) AS "date"

    in a SELECT. I tried to put it as a parameter for the DATEDIFF function, or just in a regular BETWEEN, but get no results. Thanks for any help.

    Read the article

  • HTC to launch Windows 7 phone in India

    - by samsudeen
    It is good news for Indian smartphone users, as the wait for Windows 7 mobile is finally over. The Taiwanese mobile giant HTC is all set to release its Windows 7 based smartphone series in India from January. HTC HD7 and HTC Mozart, the two smartphones running on the Windows 7 OS, started appearing on the HTC India website (HTC India) last week. Though Flipkart (the Indian online e-commerce website) started taking pre-orders for the HTC HD7 a month ago, the buzz started last week with the introduction of the HTC Mozart. The complete feature comparison between the two smartphones is given below.

    Feature Comparison

    Feature            HTC Mozart                                    HTC HD7
    OS                 Microsoft Windows 7                           Microsoft Windows 7
    Processor          Qualcomm Snapdragon QSD 8250, 1 GHz CPU       Qualcomm Snapdragon QSD 8250, 1 GHz CPU
    Camera             8 megapixel with Xenon flash                  5 MP, 2592 x 1944 pixels, autofocus, dual-LED flash
    Display            480 x 800 pixels, 3.7 inches                  480 x 800 pixels, 4.3 inches
    Size / weight      11.9 mm thick, weighs 130 g                   11.2 mm thick, weighs 162 g
    Bluetooth          2.1                                           2.1
    Internal storage   8 GB                                          8 GB
    Memory             512 MB ROM, 576 MB RAM                        512 MB ROM, 576 MB RAM
    3G                 HSDPA 7.2 Mbps, HSUPA 2 Mbps                  HSDPA 7.2 Mbps, HSUPA 2 Mbps
    Wi-Fi              802.11 b/g/n                                  802.11 b/g/n
    Connector          Micro-USB                                     Micro-USB
    Audio              3.5 mm audio jack                             3.5 mm audio jack
    GPS                GPS antenna                                   GPS antenna
    Battery            Standard Li-Po, 1300 mAh                      Standard Li-Ion, 1230 mAh
    Standby            360 h (2G), up to 435 h (3G)                  Up to 310 h (2G), up to 320 h (3G)
    Talk time          Up to 6 h 40 min (2G), 5 h 30 min (3G)        Up to 6 h 20 min (2G), 5 h 20 min (3G)

    Estimated Price: the HTC HD7 is priced between INR 27,855 and 32,000. Though the price of the HTC Mozart has not been officially announced, it is estimated to be around INR 30,000.

    Where to Buy: the Windows 7 phones are not yet available directly in stores, but most of the leading mobile stores are taking pre-orders. Some of the online store links: Flipkart, UniverCell.

    This article titled, HTC to launch Windows 7 phone in India, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Evolution Of High Definition TV Viewing

    - by Gopinath
    The following guest post is written by Rob, who is also blogging on entertainment technology topics on iwantsky.com Gone are the days when you need to squint to be able to see the emotions on the faces of Humphrey Bogart and Ingrid Bergman as the lovers bid each other adieu in the classic film Casablanca. These days, watching an ordinary ant painstakingly carry a leaf in Animal Planet can be an exhilarating experience as you get to see not only the slightest movement but also the demarcation line between the insect’s head, thorax and abdomen. The crystal clear imagery was made possible by the sharp minds and the tinkering hands of the scientists that have designed the modern world’s HDTV. What is HDTV and what makes people so agog to have this new innovation in TV watching? HDTV stands for High Definition TV. Television viewing has indeed made a big leap. From the grainy black and whites, TV viewing had moved to colored TVs, progressed to SD TVs and now to HDTV. HDTV is the emerging trend in TV viewing as it delivers bigger and clearer pictures and better audio. Viewers can have a cinema-like TV viewing experience right in the comforts of their own home. With HDTV the viewer is allowed to have a better viewing range. With Standard (SD) TV, the viewer has to be at a distance that is from 3 to 6 times the size of the screen. HDTV allows the viewer to enjoy sharper and clearer images as it is possible to sit at a distance that is 1.5 or 3 times the size of the screen without noticing any image pixilation. Although HDTV appears to be a fairly new innovation, this system has actually existed in various forms years ago. Development of the HDTV was started in Europe as early as 1940s. However, the NTSC and the PAL/SECAM, the two analog TV standards became dominant and became popular worldwide. The analog TV was replaced by the digital TV platform in the 1990s. Even during the analog era, attempts have been made to develop HDTV. Japan has come out with MUSE system. However, due to channel bandwidth requirement concerns, the program was shelved. The entry of four organizations into the HDTV market spurred the development of a beneficial coalition. The AT&T, ATRC, MIT and Zenith HDTV combined forces. In 1993, a Grand Alliance was formed. This group is composed of researchers and HDTV manufacturers. A common standard for the broadcast system of HDTV was developed. In 1995, the system was tested and found successful. With the higher screen resolution of HDTV, viewing has never been more enjoyable. [Image courtesy: samsung] This article titled,Evolution Of High Definition TV Viewing, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • HTTP Headers: Max-Age vs Expires – Which One To Choose?

    - by Gopinath
    Caching static content like images, scripts and styles in the client browser reduces the load on web servers and also improves the end user's browsing experience by loading web pages quickly. We can use the HTTP headers Expires or Cache-Control: max-age to cache content in the client browser and set an expiry time for it. The Expires header is an HTTP/1.0 standard, and Cache-Control: max-age was introduced in the HTTP/1.1 specification to solve the issues and limitations of the Expires header. Consider the following headers:

    Cache-Control: max-age=24560
    Expires: Tue, 15 May 2012 06:17:00 GMT

    The first header instructs web browsers to cache the content for 24560 seconds relative to the time the content is downloaded and to expire it after that period elapses. The second header instructs the web browser to expire the content after 15 May 2012 06:17. Out of these two options, which one should you use - max-age or Expires? I prefer the max-age header for the following reasons:

    max-age is a relative value, and in most cases it makes sense to set a relative expiry date rather than an absolute one.

    Expires header values are complex to set - the time format has to be proper and the time zone appropriate. Even a small mistake in setting these values results in unexpected behaviour.

    As Expires header values are absolute, we need to keep changing them at regular intervals. Let's say we set 2011 June 1 as the expiry date on all the image files of this blog; on 2011 June 2 we would have to modify the expiry date to something like 2012 Jan 1. This adds the burden of managing the Expires headers.

    Related: Amazon S3 Tips: Quickly Add/Modify HTTP Headers To All Files Recursively

    cc image flickr:rogue3w

    This article titled, HTTP Headers: Max-Age vs Expires – Which One To Choose?, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.
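
    As an illustration only (the article is server-agnostic; ASP.NET is just an assumption here), this is roughly how a page or handler would send a relative max-age instead of an absolute Expires date:

    using System;
    using System.Web;

    public static class CacheHeaders
    {
        public static void SendWithMaxAge(HttpResponse response)
        {
            response.Cache.SetCacheability(HttpCacheability.Public);
            response.Cache.SetMaxAge(TimeSpan.FromSeconds(24560));           // Cache-Control: max-age=24560, relative to download time
            // The absolute alternative the article compares it with:
            // response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(24560)); // Expires: <fixed date>, has to be kept up to date by hand
        }
    }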

    Read the article
