Search Results

Search found 10921 results on 437 pages for 'latex environment'.


  • Data munging and data import scripting

    - by morpheous
    I need to write some scripts to carry out some tasks on my server (running Ubuntu Server 8.04 LTS). The tasks are to be run periodically, so I will be running the scripts as cron jobs. I have divided the tasks into "group A" and "group B" - because (in my mind at least), they are a bit different.

    Task group A: import data from a file and possibly reformat it - by reformatting, I mean things like sanitizing the data, possibly normalizing it and/or running calculations on 'columns' of the data - then import the munged data into a database. For now, I am mostly using MySQL for the vast majority of imports, although some files will be imported into an SQLite database. Note: the files will be mostly text files, although some of the files are in a binary format (my own proprietary format, written by a C++ application I developed).

    Task group B: extract data from the database, perform calculations on the data, and either insert into or update tables in the database.

    My coding experience is primarily as a C/C++ developer, although I have been using PHP as well for the last 2 years or so. I am from a Windows background, so I am still finding my feet in the Linux environment. My question is this - I need to write scripts to perform the tasks I described above. Although I suppose I could write a few C++ applications to be used in the shell scripts, I think it may be better to write them in a scripting language (maybe this is a flawed assumption?). My thinking is that it would be easier to modify things in a script - no need to rebuild etc. for changes to functionality. Additionally, data munging in C++ tends to involve more lines of code than in "natural" scripting languages such as Perl, Python etc.

    Assuming that the majority of people on here agree that scripting is the way to go, herein lies my dilemma: which scripting language to use to perform the tasks above (given my background)? My gut instinct tells me that Perl (shudder) would be the most obvious choice for performing all of the above tasks. BUT (and that is a big BUT) the mere mention of Perl makes my toes curl, as I had a very, very bad experience with it a while back. The syntax seems quite unnatural to me - despite how many times I have tried to learn it - so if possible, I would really like to give it a miss. I am also not sure PHP (which I already know) is a good candidate for scripting on the CLI (I have not seen many examples of how to do this, so I may be wrong). The last thing I must mention is that IF I have to learn a new language in order to do this, I cannot afford (time constraint) to spend more than a day learning the key commands/features required (I can always learn the details of the language later, once I have actually deployed the scripts).

    So, which scripting language would you recommend (PHP, Python, Perl, [insert your favorite here]) - and most importantly, WHY? Or should I just stick to writing little C++ applications that I call in a shell script? Lastly, if you have suggested a scripting language, can you please show with a FEW lines (Perl mongers - I'm looking in your direction [nothing too cryptic!] ;) ) how I can use the language you suggested to do what I want to do. Hopefully, the lines you present will convince me that it can be done easily and elegantly in the language you suggested.
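
    For a sense of scale, here is a minimal sketch of the "group A" flow in Python, one of the candidates: sanitize a tab-separated text file and load it into SQLite. The file name, column layout and table are hypothetical, not taken from the setup described above:

        import csv
        import sqlite3

        def sanitize(row):
            # strip whitespace and normalize an example numeric 'column'
            name, qty, price = (field.strip() for field in row)
            return name, int(qty), round(float(price), 2)

        con = sqlite3.connect("imports.db")
        con.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, qty INTEGER, price REAL)")
        with open("data.txt") as f:
            rows = (sanitize(r) for r in csv.reader(f, delimiter="\t") if r)
            con.executemany("INSERT INTO items VALUES (?, ?, ?)", rows)
        con.commit()
        con.close()

    The MySQL version of the same sketch differs mainly in the driver import and the placeholder style (%s instead of ?).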

  • Write to file depending on minSdkVersion - android

    - by Simon Rosenqvist
    Hi, I have written a file writer for my Android application. It is to function on a Galaxy Tab, so my minSdkVersion has to be at least 4 so that it will fill the screen. I originally started out with SdkVersion = 2, and at that point my file writer worked perfectly. Changing the SdkVersion to 4 introduced the problem: my file writer doesn't work anymore! The application runs fine, but a file doesn't get created. My .java file looks like this:

        public class HelloAndroid extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                TextView tv = new TextView(this);
                tv.setText("Hello, Android");
                setContentView(R.layout.main);
                // define a button called button1 and attach a listener to it
                Button button1 = (Button)findViewById(R.id.btnClickMe);
                button1.setOnClickListener(btnListener);
                // define a button called button2 and attach a listener to it
                Button button2 = (Button)findViewById(R.id.btnClickMe2);
                button2.setOnClickListener(btnListener2);
            }

            // a variable of type 'long' is declared and called time1
            public long time1;
            private OnClickListener btnListener = new OnClickListener() {
                public void onClick(View v) {
                    // when button1 is clicked, a timestamp is stored in time1
                    time1 = System.currentTimeMillis();
                }
            };

            // a variable of type 'long' is declared and called time2
            public long time2;
            // a variable of type 'String' is declared and called string1
            public String string1 = "time:";
            private OnClickListener btnListener2 = new OnClickListener() {
                public void onClick(View v) {
                    // when button2 is clicked, a timestamp is stored in time2
                    time2 = System.currentTimeMillis();
                    // then a file called "file.txt" is created
                    try {
                        File file = new File(Environment.getExternalStorageDirectory(), "file.txt");
                        file.createNewFile();
                        BufferedWriter writer = new BufferedWriter(new FileWriter(file, true));
                        // string1 and time2-time1 are written to the file; time2-time1 is the time
                        // that passes between pressing one button and the other, in milliseconds
                        writer.write(string1 + "\t" + (time2 - time1));
                        writer.newLine();
                        writer.flush();
                        writer.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            };
        }

    And my manifest.xml looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <application android:icon="@drawable/icon" android:label="@string/app_name">
            <activity android:name=".HelloAndroid" android:label="@string/app_name">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
        </application>

    Why does my file writer not work with minSdkVersion 4? Do I have to make a new file writer? Or what should I do? Sorry for the messy code, I'm quite new to programming :)
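
    One thing worth checking - an assumption about the cause, not something stated above: writing to external storage requires the WRITE_EXTERNAL_STORAGE permission, which was introduced in API level 4; projects whose minimum/target SDK is 3 or lower are typically granted it implicitly, which would explain why the same code worked at SdkVersion 2. A sketch of the declaration, which belongs inside the <manifest> element:

        <!-- hypothetical fix: required from API level 4 onward for external-storage writes -->
        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />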

  • How best to modernize the 2002-era J2EE app?

    - by user331465
    I have this friend.... I have this friend who works on a Java EE (J2EE) application started in the early 2000s. Currently they add a feature here and there, but have a large codebase. Over the years the team has shrunk by 70%. [Yes, the "I have this friend" is me, attempting to humorously inject teenage high-school-counselor shame into the mix.]

    Java, vintage 2002: The application uses EJB 2.1, Struts 1.x, DAOs etc. with straight JDBC calls (a mixture of stored procedures and prepared statements). No ORM. For caching they use a mixture of OpenSymphony OSCache and a home-grown cache layer. Over the last few years, they have spent effort to modernize the UI using Ajax techniques and libraries; this largely involves JavaScript libraries (jQuery, YUI, etc.).

    Client side: On the client side, the lack of an upgrade path from Struts 1 to Struts 2 discouraged them from migrating to Struts 2. Other web frameworks became popular (Wicket, Spring, JSF). Struts 2 was not the "clear winner". Migrating all the existing UI from Struts 1 to Struts 2/Wicket/etc. did not seem to present much marginal benefit at a very high cost. They did not want to have a patchwork of technologies-du-jour (subsystem X in Struts 2, subsystem Y in Wicket, etc.), so developers write new features using Struts 1.

    Server side: On the server side, they looked into moving to EJB 3, but never had a big impetus. The developers are all comfortable with ejb-jar.xml, EJBHome, EJBRemote, so "EJB 2.1 as is" represented the path of least resistance. One big complaint about the EJB environment: programmers still pretend the "EJB server runs in a separate JVM from the servlet engine". No app server (JBoss/WebLogic) has ever enforced this separation, and the team has never deployed the EJB server on a separate box than the app server. The ear file contains multiple copies of the same jar file: one for the 'web layer' (foo.war/WEB-INF/lib) and one for the server side (foo.ear/). The app server only loads one jar. The duplication makes for ambiguity.

    Caching: As for caching, they use several cache implementations: OpenSymphony cache and a homegrown cache. JGroups provides clustering support.

    Now what? The question: the team currently has spare cycles to invest in modernizing the application. Where would the smart investor spend them? The main criteria: 1) productivity gains, specifically reducing the time to develop new subsystems/features and reduced maintenance; 2) performance/scalability. They do not care about fashion or techno-du-jour street cred.

    What do you all recommend? On the persistence side: switch everything (or new development only) to JPA/JPA2? Straight Hibernate? Wait for Java EE 6? On the client/web-framework side: migrate (some or all) to Struts 2? Wicket? JSF/JSF2? As for caching: Terracotta? Ehcache? Coherence? Stick with what they have? And how best to take advantage of the huge heap sizes that the 64-bit JVMs offer? Thanks in advance.
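
    For what it's worth, the persistence-side option is the easiest to show in a few lines. A sketch of the shape of a DAO finder under JPA - entity, fields and query are hypothetical, not the team's actual code, and the two classes would live in their own files:

        import java.util.List;
        import javax.persistence.*;

        @Entity
        public class Account {
            @Id private Long id;
            private String owner;
            // getters/setters omitted
        }

        class AccountDao {
            @PersistenceContext private EntityManager em;

            public List<Account> findByOwner(String owner) {
                // the typed createQuery overload is JPA 2
                return em.createQuery(
                        "select a from Account a where a.owner = :owner", Account.class)
                        .setParameter("owner", owner)
                        .getResultList();
            }
        }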

  • Data adapter not filling my dataset

    - by Doug Ancil
    I have the following code:

        Imports System.Data.SqlClient

        Public Class Main
            Protected WithEvents DataGridView1 As DataGridView
            Dim instForm2 As New Exceptions

            Private Sub Button1_Click_1(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles startpayrollButton.Click
                Dim ssql As String = "select MAX(payrolldate) AS [payrolldate], " & _
                    "dateadd(dd, ((datediff(dd, '17530107', MAX(payrolldate))/7)*7)+7, '17530107') AS [Sunday]" & _
                    "from dbo.payroll" & _
                    " where payrollran = 'no'"
                Dim oCmd As System.Data.SqlClient.SqlCommand
                Dim oDr As System.Data.SqlClient.SqlDataReader
                oCmd = New System.Data.SqlClient.SqlCommand
                Try
                    With oCmd
                        .Connection = New System.Data.SqlClient.SqlConnection("Initial Catalog=mdr;Data Source=xxxxx;uid=xxxxx;password=xxxxx")
                        .Connection.Open()
                        .CommandType = CommandType.Text
                        .CommandText = ssql
                        oDr = .ExecuteReader()
                    End With
                    If oDr.Read Then
                        payperiodstartdate = oDr.GetDateTime(1)
                        payperiodenddate = payperiodstartdate.AddSeconds(604799)
                        Dim ButtonDialogResult As DialogResult
                        ButtonDialogResult = MessageBox.Show(" The Next Payroll Start Date is: " & payperiodstartdate.ToString() & System.Environment.NewLine & " Through End Date: " & payperiodenddate.ToString())
                        If ButtonDialogResult = Windows.Forms.DialogResult.OK Then
                            exceptionsButton.Enabled = True
                            startpayrollButton.Enabled = False
                        End If
                    End If
                    oDr.Close()
                    oCmd.Connection.Close()
                Catch ex As Exception
                    MessageBox.Show(ex.Message)
                    oCmd.Connection.Close()
                End Try
            End Sub

            Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles exceptionsButton.Click
                Dim connection As System.Data.SqlClient.SqlConnection
                Dim adapter As System.Data.SqlClient.SqlDataAdapter = New System.Data.SqlClient.SqlDataAdapter
                Dim connectionString As String = "Initial Catalog=mdr;Data Source=xxxxx;uid=xxxxx;password=xxxxx"
                Dim ds As New DataSet
                Dim _sql As String = "SELECT [Exceptions].Employeenumber, [Exceptions].exceptiondate, [Exceptions].starttime, [exceptions].endtime, [Exceptions].code, datediff(minute, starttime, endtime) as duration INTO scratchpad3" & _
                    " FROM Employees INNER JOIN Exceptions ON [Exceptions].EmployeeNumber = [Exceptions].Employeenumber" & _
                    " where [Exceptions].exceptiondate between @payperiodstartdate and @payperiodenddate" & _
                    " GROUP BY [Exceptions].Employeenumber, [Exceptions].Exceptiondate, [Exceptions].starttime, [exceptions].endtime," & _
                    " [Exceptions].code, [Exceptions].exceptiondate"
                connection = New SqlConnection(connectionString)
                connection.Open()
                Dim _CMD As SqlCommand = New SqlCommand(_sql, connection)
                _CMD.Parameters.AddWithValue("@payperiodstartdate", payperiodstartdate)
                _CMD.Parameters.AddWithValue("@payperiodenddate", payperiodenddate)
                adapter.SelectCommand = _CMD
                Try
                    adapter.Fill(ds)
                    If ds Is Nothing OrElse ds.Tables.Count = 0 OrElse ds.Tables(0).Rows.Count = 0 Then 'it's empty
                        MessageBox.Show("There was no data for this time period. Press Ok to continue", "No Data")
                        connection.Close()
                        Exceptions.saveButton.Enabled = False
                        Exceptions.Hide()
                    Else
                        connection.Close()
                    End If
                Catch ex As Exception
                    MessageBox.Show(ex.ToString)
                    connection.Close()
                End Try
                Exceptions.Show()
            End Sub

            Private Sub payrollButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles payrollButton.Click
                Payrollfinal.Show()
            End Sub
        End Class

    When I run my program and press this button (Private Sub Button2_Click ... Handles exceptionsButton.Click), my date range is within a time when I know my dataset should produce a result; but when I put a breakpoint in my code at adapter.Fill(ds) and look at it in debug, I see a table count of 0. If I run the same query in SQL Query Analyzer, I see 1 result. Can someone see why my query on my form produces a different result than Query Analyzer does? Also, here is my schema for my two tables (all varchar columns use collation SQL_Latin1_General_CP1_CI_AS):

        Exceptions:
            employeenumber  varchar(50)  NULL
            exceptiondate   datetime     NULL
            starttime       datetime     NULL
            endtime         datetime     NULL
            duration        varchar(50)  NULL
            code            varchar(50)  NULL
            approvedby      varchar(50)  NULL
            approved        varchar(50)  NULL
            time            timestamp    NULL

        employees:
            employeenumber  varchar(50)  NOT NULL
            name            varchar(50)  NOT NULL
            initials        varchar(50)  NOT NULL
            loginname1      varchar(50)  NULL
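
    One thing stands out in the query text itself - my reading, not verified against this database: SELECT ... INTO scratchpad3 creates and populates a table but returns no result set, so adapter.Fill has nothing to fill, which would match the table count of 0 seen in the debugger. (The join condition ON [Exceptions].EmployeeNumber = [Exceptions].Employeenumber also compares Exceptions with itself, which may be a separate issue.) A sketch of the two-step alternative:

        ' Hypothetical split: run the INTO as a plain command, then Fill from the new table.
        Dim createCmd As New SqlCommand(_sql, connection) ' the SELECT ... INTO from above
        createCmd.Parameters.AddWithValue("@payperiodstartdate", payperiodstartdate)
        createCmd.Parameters.AddWithValue("@payperiodenddate", payperiodenddate)
        createCmd.ExecuteNonQuery()                       ' populates scratchpad3, returns no rows

        adapter.SelectCommand = New SqlCommand("SELECT * FROM scratchpad3", connection)
        adapter.Fill(ds)                                  ' now the DataSet sees the rows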

  • Parse filename, insert to SQL

    - by jakesankey
    Thanks to Code Poet, I am now working off of this code to parse all .txt files in a directory and store them in a database. I need a bit more help though... The file names are R303717COMP_148A2075_20100520.txt (the middle section is unique per file). I would like to add something to code so that it can parse out the R303717COMP and put that in the left column of the database such as: (this is not the only R number we have) R303717COMP data data data R303717COMP data data data R303717COMP data data data etc Lastly, I would like to have it store each full file name into another table that gets checked so that it doesn't get processed twice.. Any Help is appreciated. using System; using System.Data; using System.Data.SQLite; using System.IO; namespace CSVImport { internal class Program { private static void Main(string[] args) { using (SQLiteConnection con = new SQLiteConnection("data source=data.db3")) { if (!File.Exists("data.db3")) { con.Open(); using (SQLiteCommand cmd = con.CreateCommand()) { cmd.CommandText = @" CREATE TABLE [Import] ( [RowId] integer PRIMARY KEY AUTOINCREMENT NOT NULL, [FeatType] varchar, [FeatName] varchar, [Value] varchar, [Actual] decimal, [Nominal] decimal, [Dev] decimal, [TolMin] decimal, [TolPlus] decimal, [OutOfTol] decimal, [Comment] nvarchar);"; cmd.ExecuteNonQuery(); } con.Close(); } con.Open(); using (SQLiteCommand insertCommand = con.CreateCommand()) { insertCommand.CommandText = @" INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, Comment) VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @Comment);"; insertCommand.Parameters.Add(new SQLiteParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@Value", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@OutOfTol", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Comment", DbType.String)); string[] files = Directory.GetFiles(Environment.CurrentDirectory, "TextFile*.*"); foreach (string file in files) { string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } foreach (SQLiteParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] {','}); for (int i = 0; i < values.Length - 1; i++) { SQLiteParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? value : 0; } else { param.Value = values[i]; } } insertCommand.ExecuteNonQuery(); } } } con.Close(); } } } }
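
    As a sketch of the two missing pieces - the rNumber column and the ProcessedFiles table are hypothetical additions to the schema above: the R-number can be split off the front of the file name, and a small lookup table can record files already imported.

        // Hypothetical helpers: pull "R303717COMP" off the file name and
        // skip files already recorded in a ProcessedFiles table.
        // (needs: using System.IO;)
        string fileName = Path.GetFileNameWithoutExtension(file); // R303717COMP_148A2075_20100520
        string rNumber = fileName.Split('_')[0];                  // R303717COMP

        using (SQLiteCommand check = con.CreateCommand())
        {
            check.CommandText = "SELECT COUNT(*) FROM ProcessedFiles WHERE FileName = @f;";
            check.Parameters.AddWithValue("@f", fileName);
            if (Convert.ToInt64(check.ExecuteScalar()) > 0)
            {
                continue; // inside the foreach over files: already imported
            }
        }

    The rNumber string can then be bound as one more parameter on the INSERT, and the file name written to ProcessedFiles once the import succeeds.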

  • Migrate Spring JPA DAO unit testing to google app engine

    - by twingocerise
    I'm trying to put together a simple environment where I can get Spring, Maven, JPA, Google App Engine and DAO unit testing working happily all together. The goal is to be able to run a simple DAO unit test creating an entity and then loading it again with a simple find to check it's been created properly - all of this from my Maven build. My DAO is making use of the JPA entity manager (query(), persist(), etc.). I've got it working no problem with HSQLDB and a datasource, etc., but I'm struggling to get it working with App Engine. My questions are:

    1) I'm using an entity manager, injecting my persistence unit as follows. Is that OK? Is there any need for a datasource or something special? I thought not, but correct me if I'm wrong.

    applicationContext.xml:

        <bean id='entityManagerFactory' class='org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean'>
            <property name="persistenceUnitName" value="transactions-optional" />
        </bean>

    Persistence.xml:

        <persistence-unit name="transactions-optional">
            <provider>org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider</provider>
            <properties>
                <property name="datanucleus.NontransactionalRead" value="true"/>
                <property name="datanucleus.NontransactionalWrite" value="true"/>
                <property name="datanucleus.ConnectionURL" value="appengine"/>
            </properties>
        </persistence-unit>

    2) What are the dependencies I need to add to my pom file to be able to run the unit test making use of the entityManager? What about versions? I found loads of things about appengine-api-labs/stubs/testing, but none of them got it working, i.e. I'm getting a missing JDO dependency while I'm using JPA... I also get loads of conflicts when I try to add some jars (DataNucleus and such). So far I'm trying appengine-api-1.0-sdk v1.7.0, asm-all v3.3, datanucleus core/api-jpa/enhancer v3.1.0, datanucleus-appengine v2.0.1.1 and all the GAE testing jars v1.7.0.

    3) Is there anything I need to add to my Surefire plugin (test runner) to make sure it picks up all the dependencies? I'm getting an exhausting ClassNotFoundException on DatastorePersistenceProvider even though it is in my classpath (I checked the jars and the mvn dependency:tree). I had a look at this, but it doesn't seem to be working at all: http://www.vertigrated.com/blog/2011/02/working-maven-3-google-app-engine-plugin-with-gwt-support/

    4) Do I need to use any sort of local helper to test my DAOs? Ideally I'd want to test my DAO layer "as is" with the entity manager... What's your opinion? Has anyone managed to run a unit test using JPA on Google App Engine?

    5) Do I need to set up any sort of gae.home somewhere in my pom file? Would anything make use of it (a plugin or something)?

    6) Is the gwt-maven plugin any help if I don't use GWT? I'm writing a simple webservice making use of App Engine, not a GWT app...

    Any help would be much appreciated, as I've been struggling for 2 days now... Cheers, V.
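
    On question 2, the combination usually shown for local datastore tests is the App Engine testing jar plus the API stubs, in test scope - the versions here are a guess and must match the installed SDK:

        <!-- hypothetical test-scope dependencies; versions must match the SDK -->
        <dependency>
            <groupId>com.google.appengine</groupId>
            <artifactId>appengine-testing</artifactId>
            <version>1.7.0</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.google.appengine</groupId>
            <artifactId>appengine-api-stubs</artifactId>
            <version>1.7.0</version>
            <scope>test</scope>
        </dependency>

    and a JUnit fixture along these lines (classes from com.google.appengine.tools.development.testing):

        // sketch of a local-datastore test fixture
        private final LocalServiceTestHelper helper =
                new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());

        @Before public void setUp() { helper.setUp(); }
        @After  public void tearDown() { helper.tearDown(); }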

  • Using Maven to Deploy to Weblogic Clusters

    - by Mark Sailes
    org.codehaus.mojo weblogic-maven-plugin 2.9.1 We're currently using the weblogic maven plugin successfully to deploy to our local WebLogic 9.2 instances. When we try to deploy to a remote environment we have a problem. We use a two machine cluster, with the admin server and managed server on one machine, and another managed server on a seperate machine. When your plugin uploads the application to the admin server, it doesn't copy it to the second managed server on the seperate machine. This then causes the second managed server a problem, as it cannot find the application in the location where the admin server saved it on its own machine. Config below <configuration> <adminServerHostName>${weblogic.adminServerHostName}</adminServerHostName> <adminServerPort>${weblogic.adminServerPort}</adminServerPort> <adminServerProtocol>${weblogic.adminServerProtocol}</adminServerProtocol> <userId>${weblogic.userId}</userId> <password>${weblogic.password}</password> <upload>${weblogic.upload}</upload> <remote>${weblogic.remote}</remote> <verbose>${weblogic.verbose}</verbose> <debug>${weblogic.debug}</debug> <stage>${weblogic.stage}</stage> <targetNames>${weblogic.targetNames}</targetNames> <exploded>${weblogic.exploded}</exploded> </configuration> <profile> <id>localhost</id> <properties> <weblogic.adminServerHostName>localhost</weblogic.adminServerHostName> <weblogic.adminServerPort>7001</weblogic.adminServerPort> <weblogic.adminServerProtocol>t3</weblogic.adminServerProtocol> <weblogic.userId>weblogic</weblogic.userId> <weblogic.password>weblogic</weblogic.password> <weblogic.upload>false</weblogic.upload> <weblogic.remote>false</weblogic.remote> <weblogic.verbose>true</weblogic.verbose> <weblogic.debug>true</weblogic.debug> <weblogic.stage>false</weblogic.stage> <weblogic.targetNames>AdminServer</weblogic.targetNames> <weblogic.exploded>false</weblogic.exploded> </properties> </profile> <profile> <id>dev</id> <properties> <weblogic.adminServerHostName>******</weblogic.adminServerHostName> <weblogic.adminServerPort>9141</weblogic.adminServerPort> <weblogic.adminServerProtocol>t3</weblogic.adminServerProtocol> <weblogic.userId>******</weblogic.userId> <weblogic.password>******</weblogic.password> <weblogic.upload>true</weblogic.upload> <weblogic.remote>true</weblogic.remote> <weblogic.verbose>true</weblogic.verbose> <weblogic.debug>true</weblogic.debug> <weblogic.stage>true</weblogic.stage> <weblogic.targetNames>dev_cluster01</weblogic.targetNames> <weblogic.exploded>false</weblogic.exploded> </properties> </profile>

  • What version-control system is most trivial to set up and use for toy projects?

    - by Norman Ramsey
    I teach the third required intro course in a CS department. One of my homework assignments asks students to speed up code they have written for a previous assignment. Factor-of-ten speedups are routine; factors of 100 or 1000 are not unheard of. (For a factor-of-1000 speedup you have to have made rookie mistakes with malloc().) Programs are improved by a sequence of small changes. I ask students to record and describe each change and the resulting improvement.

    While you're improving a program it is also possible to break it. Wouldn't it be nice to back out? You can see where I'm going with this: my students would benefit enormously from version control. But there are some caveats: Our computing environment is locked down; anything that depends on a central repository is suspect. Our students are incredibly overloaded - not just classes but jobs, sports, music, you name it - so for them to use a new tool it has to be incredibly easy and have obvious benefits. Our students do most work in pairs, and getting bits back and forth between accounts is problematic. Could this problem also be solved by distributed version control? Complexity is the enemy. I know setting up a CVS repository is too baffling - I myself still have trouble because I only do it once a year. I'm told SVN is even harder.

    Here are my comments on existing systems: I think central version control (CVS or SVN) is ruled out because our students don't have the administrative privileges needed to make a repository that they can share with one other student. (We are stuck with Unix file permissions.) Also, setup on CVS or SVN is too hard. darcs is way easy to set up, but it's not obvious how you share things; darcs send (to send patches by email) seems promising, but it's not clear how to set it up. The introductory documentation for git is not for beginners. Like CVS setup, it's something I myself have trouble with.

    I'm soliciting suggestions for what source control to use with beginning students. I suspect we can find resources to put a thin veneer over an existing system and to simplify existing documentation. We probably don't have resources to write new documentation. So, what's really easy to set up, commit, revert, and share changes with a partner, but does not have to be easy to merge or to work at scale? A key constraint is that programming pairs have to be able to share work with each other and only each other, and pairs change every week. Our infrastructure is Linux, Solaris, and Windows with a NetApp filer. I doubt my IT staff wants to create a Unix group for each pair of students. Is there an easier solution I've overlooked? (Thanks for the accepted answer, which beats the others on account of its excellent reference to Git Magic as well as the helpful comments.)
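
    For scale, here is roughly what the pair-sharing workflow looks like in git with a bare repository on the shared filesystem - no server and no central service, though the shared directory still needs group-writable permissions, which is exactly the Unix-group problem raised above (paths are hypothetical):

        # one-time setup by one member of the pair
        git init --bare /courses/cs123/pairs/alice-bob.git

        # each partner, in their own account
        git clone /courses/cs123/pairs/alice-bob.git speedup
        cd speedup
        # ...edit, then record one improvement...
        git commit -a -m "replace per-node malloc with arena: 12x"
        git push            # share with partner
        git pull            # pick up partner's changes
        git revert HEAD     # back out a change that broke the program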

  • Java, LDAP: Make it not ignore blank passwords?

    - by Steve
    I'm maintaining some legacy Java LDAP code. I know next to nothing about LDAP. The program below basically just sends the userid and password to the LDAP server, receives notification back if the credentials are good. If so, it prints out the LDAP attributes received from the LDAP server, if not it prints out an exception. All works well if a bad password is given. An "invalid credentials" exception gets thrown. However, if a blank password is sent to the LDAP Server, authentication will still happen, LDAP attributes will still be returned. Is this unhappy situation due to the LDAP server allowing blank passwords, or does the code below need to be adjusted such a blank password will get fed to the LDAP server in such a way so it will get rejected? I do have data validation in place. I took it off in a testing environment to solve another issue and noticed this problem. I would prefer not to have this problem underneath the data validation. Thanks much in advance for any information import javax.naming.*; import javax.naming.directory.*; import java.util.*; import java.sql.*; public class LDAPTEST { public static void main(String args[]) { String lcf = "com.sun.jndi.ldap.LdapCtxFactory"; String ldapurl = "ldaps://ldap-cit.smew.acme.com:636/o=acme.com"; String loginid = "George.Jetson"; String password = ""; DirContext ctx = null; Hashtable env = new Hashtable(); Attributes attr = null; Attributes resultsAttrs = null; SearchResult result = null; NamingEnumeration results = null; int iResults = 0; int iAttributes = 0; env.put(Context.INITIAL_CONTEXT_FACTORY, lcf); env.put(Context.PROVIDER_URL, ldapurl); env.put(Context.SECURITY_PROTOCOL, "ssl"); env.put(Context.SECURITY_AUTHENTICATION, "simple"); env.put(Context.SECURITY_PRINCIPAL, "uid=" + loginid + ",ou=People,o=acme.com"); env.put(Context.SECURITY_CREDENTIALS, password); try { ctx = new InitialDirContext(env); attr = new BasicAttributes(true); attr.put(new BasicAttribute("uid",loginid)); results = ctx.search("ou=People",attr); while (results.hasMore()) { result = (SearchResult)results.next(); resultsAttrs = result.getAttributes(); for (NamingEnumeration enumAttributes = resultsAttrs.getAll(); enumAttributes.hasMore();) { Attribute a = (Attribute)enumAttributes.next(); System.out.println("attribute: " + a.getID() + " : " + a.get().toString()); iAttributes++; }// end for loop iResults++; }// end while loop System.out.println("Records == " + iResults + " Attributes: " + iAttributes); }// end try catch (Exception e) { e.printStackTrace(); } }// end function main() }// end class LDAPTEST
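
    For what it's worth, many LDAP servers treat a simple bind with an empty password as an anonymous (unauthenticated) bind that "succeeds", which would explain the behavior. A defensive guard in the client code - a sketch, independent of whatever the server's policy is - could sit just before the InitialDirContext is built:

        // Reject blank credentials before they reach the server: an empty
        // password can turn a simple bind into an anonymous bind.
        if (password == null || password.trim().length() == 0) {
            throw new javax.naming.AuthenticationException(
                    "Empty password refused: would result in an anonymous bind");
        }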

  • Dot Net Nuke module works in "Edit" mode but not for "View": cache problem?

    - by Godeke
    I have a DNN task that simply runs some Javascript to compute a price based on a few input fields. This module works fine on our production site, but we had a company do a skin for us to improve the look of the site and the module fails under this new system. (DNN 05.06.00 (459) although it was 5.5 prior... I updated in a futile hope that it was a bug in the old revision.) What is incredibly odd about this is that the module works fine when I'm logged in to DNN and using the Edit mode as an administrator. In this case the small snippet of JavaScript loads fine and filling the fields results in a price. On the other hand it I click "View" (or more importantly, if I'm not logged in at all) the page loads a cached copy. Even odder, I have found the cache files in \Portals\2\Cache\Pages are generated and then only the cached data is being used. When the cached copy is loaded, the JavaScript doesn't appear (it is normally created via a Page.ClientScript.RegisterClientScriptBlock(). Additionally, the button which posts the data to the server doesn't execute any of the server side code (confirmed with a debugger) but instead just reloads the cached copy. If I manually delete the files in \Portals\2\Cache\Pages then everything works properly, but I have to do so after every page load: failing to do so simply loads the page as it was last generated repeatedly. Resetting the application (either via the UI or editing web.config) doesn't change this and clearing the cache from the Host Settings page doesn't actually clear these cached pages. I'm guessing that Edit mode bypasses the cache in some way, but I have gone as far as turning off all caching on the site (which is horrible for performance) and the cached version is still loaded. Has anyone seen anything like this? Shouldn't clearing the cache clear the files (I'm using the File provider for caching)? Shouldn't even a cached page go back to the server if the user posts back? EDIT: I should point out that permissions don't appear to be a problem on the cache directory... other pages cached output are deleted from this folder, just this page has this issue. EDIT 2: Clarifying some settings and conditions which I didn't provide. First, this module works fine in production under DNN 5.6.0. In our test environment with the consulting company's changes it fails (the changes are skin and page layout only in theory: the module source itself verifies as unchanged). All cache settings and the like have been verified the same between the two and we only resorted to setting the module cache to 0 and -1 (and disabling the test site's cache entirely) when we couldn't find another cause for the problem. I have watched the cache work correctly on many other pages in test: there is something about this page that is causing the problem. We have punted and are creating an installable skin based on the consultant's work as I suspect they have somehow corrupted the DNN install (database side I think).

  • Accessing Layout Items from inside Widget AppWidgetProvider

    - by cam4mav
    I am starting to go insane trying to figure this out. It seems like it should be very easy; I'm starting to wonder if it's possible. What I am trying to do is create a home screen widget that only contains an ImageButton. When it is pressed, the idea is to change some setting (like the wi-fi toggle) and then change the button's image. I have the ImageButton declared like this in my main.xml:

        <ImageButton android:id="@+id/buttonOne"
            android:src="@drawable/button_normal_ringer"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center" />

    My AppWidgetProvider class, named ButtonWidget (note that the RemoteViews instance is a locally stored variable; this allowed me to get access to the RemoteViews layout elements... or so I thought):

        @Override
        public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
            remoteViews = new RemoteViews(context.getPackageName(), R.layout.main);
            Intent active = new Intent(context, ButtonWidget.class);
            active.setAction(VIBRATE_UPDATE);
            active.putExtra("msg", "TESTING");
            PendingIntent actionPendingIntent = PendingIntent.getBroadcast(context, 0, active, 0);
            remoteViews.setOnClickPendingIntent(R.id.buttonOne, actionPendingIntent);
            appWidgetManager.updateAppWidget(appWidgetIds, remoteViews);
        }

        @Override
        public void onReceive(Context context, Intent intent) {
            // v1.5 fix that doesn't call onDelete Action
            final String action = intent.getAction();
            Log.d("onReceive", action);
            if (AppWidgetManager.ACTION_APPWIDGET_DELETED.equals(action)) {
                final int appWidgetId = intent.getExtras().getInt(
                        AppWidgetManager.EXTRA_APPWIDGET_ID,
                        AppWidgetManager.INVALID_APPWIDGET_ID);
                if (appWidgetId != AppWidgetManager.INVALID_APPWIDGET_ID) {
                    this.onDeleted(context, new int[] { appWidgetId });
                }
            } else {
                // check if our Action was called
                if (intent.getAction().equals(VIBRATE_UPDATE)) {
                    String msg = "null";
                    try {
                        msg = intent.getStringExtra("msg");
                    } catch (NullPointerException e) {
                        Log.e("Error", "msg = null");
                    }
                    Log.d("onReceive", msg);
                    if (remoteViews != null) {
                        Log.d("onReceive", "" + remoteViews.getLayoutId());
                        remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer);
                        Log.d("onReceive", "tried to switch");
                    } else {
                        Log.d("F!", "--naughty language used here!!!--");
                    }
                }
                super.onReceive(context, intent);
            }
        }

    So, I've been testing this, and the onReceive method works great: I'm able to send notifications and all sorts of stuff (removed from the code for ease of reading). The one thing I can't do is change any properties of the view elements. To try and fix this, I made the RemoteViews a local and static private variable. Using logs, I was able to see that when multiple instances of the app are on screen, they all refer to the one instance of RemoteViews - perfect for what I'm trying to do. The trouble is in trying to change the image of the ImageButton. I can do this from within the onUpdate method using this:

        remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer);

    That doesn't do me any good once the widget is created, though. For some reason, even though it's inside the same class, being inside the onReceive method makes that line not work. That line used to throw a NullPointerException, as a matter of fact, until I changed the variable to static. Now it passes the null test, refers to the same layoutId as it did at the start, reads the line, but it does nothing. It's like the code isn't even there; it just keeps chugging along. SO... is there any way to modify layout elements from within a widget after the widget has been created!? I want to do this based on the environment, not with a configuration activity launch. I've been looking at various questions and this seems to be an issue that really hasn't been solved, such as [link text] and [link text]. Oh, and for anyone who finds this and wants a good starting tutorial for widgets, this one is easy to follow (though a bit old, it gets you comfortable with widgets): [.pdf link text]. Hopefully someone can help here. I kinda have the feeling that this is illegal and there is a different way to go about this. I would LOVE to be told another approach!!!! Thanks
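
    A sketch of the approach that usually works here - an assumption on my part, not from the question: don't cache the RemoteViews across broadcasts at all. Build a fresh one inside onReceive and push it through AppWidgetManager, which is what actually redraws a widget after creation:

        // hypothetical rework inside onReceive(), after matching VIBRATE_UPDATE
        RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.main);
        views.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer);

        // re-attach the click PendingIntent - a fresh RemoteViews starts empty
        Intent active = new Intent(context, ButtonWidget.class);
        active.setAction(VIBRATE_UPDATE);
        views.setOnClickPendingIntent(R.id.buttonOne,
                PendingIntent.getBroadcast(context, 0, active, 0));

        AppWidgetManager.getInstance(context).updateAppWidget(
                new ComponentName(context, ButtonWidget.class), views);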

  • clock and date showing on a live site but not on localhost

    - by grumpypanda
    I've got clock.swf and date.swf working fine on a live site; now I am using the same code to set up a local development environment. Everything is working well except that clock.swf and date.swf stopped working on localhost. I get the same two yellow errors, "You need to update your Flash plugin. Click here if you want to continue." - but of course my Flash player is up to date, since the live site is working fine. I'll post the code below which I think has caused the error. I've been searching online for the last couple of hours with no luck - has anyone run into an issue like this before? What can be the possible cause? Any help is appreciated. This is in index.php (I can post more code here if needed):

        <?php embed_flash("swf/clock.swf", CLOCK_WIDTH, CLOCK_HEIGHT, "8", '', "flashcontent");?>
        <?php embed_flash("swf/date.swf", DATE_WIDTH, DATE_HEIGHT, "8", '', "flashcontent_date");?>

    configure.php:

        define('CLOCK_WIDTH', '450');
        define('CLOCK_HEIGHT', '');
        define('DATE_WIDTH', '440');
        define('DATE_HEIGHT', '');

    flash_function.php:

        <?php
        function embed_flash($name, $w, $h, $version, $bgcolor, $id) {
            $cacheBuster = rand();
            $padTop = $h/3;
        ?>
        <style>
            a.noflash:link, a.noflash:visited, a.noflash:active {color: #1860C2; text-decoration: none; background:#FFFFFF;}
            a.noflash:hover {color:#000; text-decoration:none; background:#EEEEEE;}
            .message { width: <?=$w;?>px; font-size:12px; font-weight:normal; margin-bottom: 10px; padding: 5px; color: #EEE; background: orange;"}
        </style>
        <div id="<?=$id; ?>" align="center">
            <noscript>
                <div class="message">
                    Please enable <a href="https://www.google.com/support/adsense/bin/answer.py?answer=12654" target="_blank" class="noflash">&nbsp;JavaScript&nbsp;</a> to view this page properly.
                </div>
            </noscript>
            <div class="message">
                You need to update your Flash plugin. Click <a href="http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash&promoid=BIOW" target="_blank" class="noflash">&nbsp;here&nbsp;</a> if you want to continue.
            </div>
        </div>
        <script type="text/javascript">
            // <![CDATA[
            var so = new SWFObject("<?=$name;?>", "", "<?=$w;?>", "<?=$h;?>", "<?=$version;?>", "<?=$bgcolor;?>");
            so.addParam("quality", "high");
            so.addParam("allowScriptAccess", "sameDomain");
            so.addParam("scale", "showall");
            so.addParam("loop", "false");
            so.addParam("wmode", "transparent");
            so.write("<?=$id;?>");
            // ]]>
        </script>

  • How do I detect a file write error in C?

    - by rich
    I have an embedded environment where a user might insert or remove a USB flash drive. I would like to know if the drive has been removed, or if there is some other problem when I try to write to the drive. However, Linux just saves the information in its buffers and returns with no indicated error. The computer I'm using comes with a 2.4.26 kernel and libc 2.3.2. I'm mounting the drive this way: i = mount(MEMORY_DEV_PATH, MEMORY_MNT_PATH, "vfat", MS_SYNCHRONOUS, NULL); That works: 50:/root # mount /dev/scsi/host0/bus0/target0/lun0/part1 on /mem type vfat (rw,sync) 50:/root # Later, I try to copy a file to it: int ifile, ofile; ifile = open("/tmp/tmpmidi.mid", O_RDONLY); if (ifile < 0) { perror("open in"); break; } ofile = open(current_file_name.c_str(), O_WRONLY | O_SYNC); if (ofile < 0) { perror("open out"); break; } #define BUFSZ 256 char buffer[BUFSZ]; while (1) { i = read(ifile, buffer, BUFSZ); if (i < 0) { perror("read"); break; } j = write(ofile, buffer, i); if (j < 0) { perror("write"); break; } if (i != j) { perror("Sizes wrong"); break; } if (i < BUFSZ) { printf("Copy is finished, I hope\n"); close(ifile); close(ofile); break; } } If this snippet of code is executed with a write-protected USB memory, the result is Copy is finished, I hope amid a flurry of error messages from the kernel on the console. I believe the same thing would happen if I simply removed the USB drive (without unmounting it). I have also fiddled with devfs. I figured out how to get it to automatically mount the drive, (with the REGISTER event) but it never seems to trigger the UNREGISTER when I pull out the memory. How can I determine in my program whether I have successfully created a file? Update 4 July: It was a silly oversight of me not to check the result from close(). Unfortunately, the file can be closed without error. So that didn't help. What about fsync()? That sounds like a good idea, but that didn't catch the error either. There might be some interesting information in /sys if I had such a thing. I believe that didn't get added until 2.6.?. The comment(s) about the quality of my flash drive are probably justified. It's one of the earlier ones. In fact, write protect switches seem to be extremely rare these days. I think I have to use the overkill option: Create a file, unmount & remount the drive, and check to see if the file is there. If that doesn't solve my problem, then something is really messed up! Note to myself: Make sure the file you try to create isn't already there! By the way, this does happen to be a C++ program. You can tell by the .c_str() which I had intended to edit out for simplicity.
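
    For reference, the full set of return-value checks under discussion looks like this - write(), fsync() and close() can each report the failure, and errno says why. As the update above notes, on this 2.4-era VFAT setup even these may stay silent, which is what motivates the unmount-and-verify fallback:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Returns 0 on success, -1 if any stage reported an error. */
        static int checked_flush_and_close(int fd)
        {
            int rc = 0;
            if (fsync(fd) < 0) {   /* force data out to the device */
                fprintf(stderr, "fsync: %s\n", strerror(errno));
                rc = -1;
            }
            if (close(fd) < 0) {   /* close can also surface deferred write errors */
                fprintf(stderr, "close: %s\n", strerror(errno));
                rc = -1;
            }
            return rc;
        }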

  • WCF/REST Get image into picturebox?

    - by Garrith
    So I have a WCF REST service which successfully runs from a console app. If I navigate to http://localhost:8000/Service/picture/300/400, my image is displayed - note the 300/400 sets the width and height of the image within the body of the HTML page. The code looks like this:

        namespace WcfServiceLibrary1
        {
            [ServiceContract]
            public interface IReceiveData
            {
                [OperationContract]
                [WebInvoke(Method = "GET", BodyStyle = WebMessageBodyStyle.Wrapped,
                    ResponseFormat = WebMessageFormat.Xml, UriTemplate = "picture/{width}/{height}")]
                Stream GetImage(string width, string height);
            }

            public class RawDataService : IReceiveData
            {
                public Stream GetImage(string width, string height)
                {
                    int w, h;
                    if (!Int32.TryParse(width, out w)) { w = 640; } // Handle error
                    if (!Int32.TryParse(height, out h)) { h = 400; }
                    Bitmap bitmap = new Bitmap(w, h);
                    for (int i = 0; i < bitmap.Width; i++)
                    {
                        for (int j = 0; j < bitmap.Height; j++)
                        {
                            bitmap.SetPixel(i, j, (Math.Abs(i - j) < 2) ? Color.Blue : Color.Yellow);
                        }
                    }
                    MemoryStream ms = new MemoryStream();
                    bitmap.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                    ms.Position = 0;
                    WebOperationContext.Current.OutgoingResponse.ContentType = "image/jpeg";
                    return ms;
                }
            }
        }

    What I want to do now is use a client application - my Windows Forms app - and add that image to a PictureBox. I'm a bit stuck as to how this can be achieved, as I would like the width and height of the image from my WCF REST service to be set by the width and height of the PictureBox. I have tried this, but two of the lines have errors, and I'm not even sure it will work, as the code for my WCF REST service separates width and height with a "/", as you can see in the URL:

        string uri = "http://localhost:8080/Service/picture";

        private void button1_Click(object sender, EventArgs e)
        {
            StringBuilder sb = new StringBuilder();
            sb.AppendLine("<picture>");
            // the url looks like http://localhost:8080/Service/picture/300/400 when
            // accessing the image, so I am trying to set this here
            sb.AppendLine("<width>" + pictureBox1.Image.Width + "</width>");
            sb.AppendLine("<height>" + pictureBox1.Image.Height + "</height>");
            sb.AppendLine("</picture>");
            string picture = sb.ToString();
            byte[] getimage = Encoding.UTF8.GetBytes(picture); // not sure this is right
            HttpWebRequest req = WebRequest.Create(uri); // error: cannot convert WebRequest to HttpWebRequest
            req.Method = "GET";
            req.ContentType = "image/jpg";
            req.ContentLength = getimage.Length;
            MemoryStream reqStrm = req.GetRequestStream(); // error: cannot convert IO stream to memory stream
            reqStrm.Write(getimage, 0, getimage.Length);
            reqStrm.Close();
            HttpWebResponse resp = req.GetResponse(); // error: cannot convert WebResponse to HttpWebResponse
            MessageBox.Show(resp.StatusDescription);
            pictureBox1.Image = Image.FromStream(reqStrm);
            reqStrm.Close();
            resp.Close();
        }

    So I'm just wondering if someone could help me out with this futile attempt at adding a variable image size from my REST service to a PictureBox on button click. This is the host app as well:

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string baseAddress = "http://" + Environment.MachineName + ":8000/Service";
                    ServiceHost host = new ServiceHost(typeof(RawDataService), new Uri(baseAddress));
                    host.AddServiceEndpoint(typeof(IReceiveData), new WebHttpBinding(), "")
                        .Behaviors.Add(new WebHttpBehavior());
                    host.Open();
                    Console.WriteLine("Host opened");
                    Console.ReadLine();
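
    Since the service takes the size in the URL path, the client does not need a request body at all - a plain GET decoded with Image.FromStream is enough. A sketch (the explicit casts fix the "cannot convert" errors noted in the comments; port 8000 matches the host shown above):

        // hypothetical client-side fetch: build the size into the URL, GET, decode
        string uri = string.Format("http://localhost:8000/Service/picture/{0}/{1}",
                                   pictureBox1.Width, pictureBox1.Height);

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri); // explicit cast
        req.Method = "GET";

        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        using (Stream body = resp.GetResponseStream())
        {
            MemoryStream buffer = new MemoryStream();
            byte[] chunk = new byte[4096];
            int read;
            while ((read = body.Read(chunk, 0, chunk.Length)) > 0)
            {
                buffer.Write(chunk, 0, read);
            }
            buffer.Position = 0;
            // GDI+ needs the stream to stay open for the image's lifetime,
            // so the MemoryStream is deliberately not disposed here
            pictureBox1.Image = Image.FromStream(buffer);
        }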

  • Same source, multiple targets with different resources (Visual Studio .Net 2008)

    - by Mike Bell
    A set of software products differ only by their resource strings, binary resources, and by the strings / graphics / product keys used by their Visual Studio Setup projects. What is the best way to create, organize, and maintain them? i.e. All the products essentially consist of the same core functionality customized by graphics, strings, and other resource data to form each product. Imagine you are creating a set of products like "Excel for Bankers", Excel for Gardeners", "Excel for CEOs", etc. Each product has the the same functionality, but differs in name, graphics, help files, included templates etc. The environment in which these are being built is: vanilla Windows.Forms / Visual Studio 2008 / C# / .Net. The ideal solution would be easy to maintain. e.g. If I introduce a new string / new resource projects I haven't added the resource to should fail at compile time, not run time. (And subsequent localization of the products should also be feasible). Hopefully I've missed the blindingly-obvious and easy way of doing all this. What is it? ============ Clarification(s) ================ By "product" I mean the package of software that gets installed by the installer and sold to the end user. Currently I have one solution, consisting of multiple projects, (including a Setup project), which builds a set of assemblies and create a single installer. What I need to produce are multiple products/installers, all with similar functionality, which are built from the same set of assemblies but differ in the set of resources used by one of the assemblies. What's the best way of doing this? ------------ The 95% Solution ----------------- Based upon Daminen_the_unbeliever's answer, a resource file per configuration can be achieved as follows: Create a class library project ("Satellite"). Delete the default .cs file and add a folder ("Default") Create a resource file in the folder "MyResources" Properties - set CustomToolNamespace to something appropriate (e.g. "XXX") Make sure the access modifier for the resources is "Public". Add the resources. Edit the source code. Refer to the resources in your code as XXX.MyResources.ResourceName) Create Configurations for each product variant ("ConfigN") For each product variant, create a folder ("VariantN") Copy and Paste the MyResources file into each VariantN folder Unload the "Satellite" project, and edit the .csproj file For each "VariantN/MyResources" <Compile> or <EmbeddedResource> tag, add a Condition="'$(Configuration)' == 'ConfigN'" attribute. Save, Reload the .csproj, and you're done... This creates a per-configuration resource file, which can (presumably) be further localized. Compile error messages are produced for any configuration that where a a resource is missing. The resource files can be localized using the standard method (create a second resources file (MyResources.fr.resx) and edit .csproj as before). The reason this is a 95% solution is that resources used to initialize forms (e.g. Form Titles, button texts) can't be easily handled in the same manner - the easiest approach seems to be to overwrite these with values from the satellite assembly.
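
    Spelled out, the .csproj edit described in the recipe above ends up looking something like this - folder and configuration names follow the Variant/Config naming used there:

        <!-- only one variant's resources are compiled into a given configuration -->
        <ItemGroup>
            <EmbeddedResource Include="Variant1\MyResources.resx"
                              Condition="'$(Configuration)' == 'Config1'" />
            <EmbeddedResource Include="Variant2\MyResources.resx"
                              Condition="'$(Configuration)' == 'Config2'" />
        </ItemGroup>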

  • Please clarify how create/update happens against child entities of an aggregate root

    - by christian
    After much reading and thinking as I begin to get my head wrapped around DDD, I am a bit confused about the best practices for dealing with complex hierarchies under an aggregate root. I think this is a FAQ but after reading countless examples and discussions, no one is quite talking about the issue I'm seeing. If I am aligned with the DDD thinking, entities below the aggregate root should be immutable. This is the crux of my trouble, so if that isn't correct, that is why I'm lost. Here is a fabricated example...hope it holds enough water to discuss. Consider an automobile insurance policy (I'm not in insurance, but this matches the language I hear when on the phone w/ my insurance company). Policy is clearly an entity. Within the policy, let's say we have Auto. Auto, for the sake of this example, only exists within a policy (maybe you could transfer an Auto to another policy, so this is potential for an aggregate as well, which changes Policy...but assume it simpler than that for now). Since an Auto cannot exist without a Policy, I think it should be an Entity but not a root. So Policy in this case is an aggregate root. Now, to create a Policy, let's assume it has to have at least one auto. This is where I get frustrated. Assume Auto is fairly complex, including many fields and maybe a child for where it is garaged (a Location). If I understand correctly, a "create Policy" constructor/factory would have to take as input an Auto or be restricted via a builder to not be created without this Auto. And the Auto's creation, since it is an entity, can't be done beforehand (because it is immutable? maybe this is just an incorrect interpretation). So you don't get to say new Auto and then setX, setY, add(Z). If Auto is more than somewhat trivial, you end up having to build a huge hierarchy of builders and such to try to manage creating an Auto within the context of the Policy. One more twist to this is later, after the Policy is created and one wishes to add another Auto...or update an existing Auto. Clearly, the Policy controls this...fine...but Policy.addAuto() won't quite fly because one can't just pass in a new Auto (right!?). Examples say things like Policy.addAuto(VIN, make, model, etc.) but are all so simple that that looks reasonable. But if this factory method approach falls apart with too many parameters (the entire Auto interface, conceivably) I need a solution. From that point in my thinking, I'm realizing that having a transient reference to an entity is OK. So, maybe it is fine to have a entity created outside of its parent within the aggregate in a transient environment, so maybe it is OK to say something like: auto = AutoFactory.createAuto(); auto.setX auto.setY or if sticking to immutability, AutoBuilder.new().setX().setY().build() and then have it get sorted out when you say Policy.addAuto(auto) This insurance example gets more interesting if you add Events, such as an Accident with its PolicyReports or RepairEstimates...some value objects but most entities that are all really meaningless outside the policy...at least for my simple example. The lifecycle of Policy with its growing hierarchy over time seems the fundamental picture I must draw before really starting to dig in...and it is more the factory concept or how the child entities get built/attached to an aggregate root that I haven't seen a solid example of. I think I'm close. Hope this is clear and not just a repeat FAQ that has answers all over the place.
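
    One way to make the addAuto() signature concrete without passing the whole Auto interface - a sketch of a common pattern, not "the" DDD answer, and the names here are hypothetical: have the caller hand the aggregate root an immutable description (a value object) and let Policy construct the entity internally.

        // AutoDescription is a value object; the Auto entity stays under Policy's control.
        public final class AutoDescription {
            public final String vin, make, model;
            public final Location garagedAt;
            public AutoDescription(String vin, String make, String model, Location garagedAt) {
                this.vin = vin; this.make = make; this.model = model; this.garagedAt = garagedAt;
            }
        }

        public class Policy {
            private final List<Auto> autos = new ArrayList<Auto>();

            public Auto addAuto(AutoDescription desc) {
                Auto auto = new Auto(this, desc); // entity built inside the aggregate
                autos.add(auto);
                return auto;
            }
        }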

  • Which of CouchDB or MongoDB suits my needs?

    - by vonconrad
    Where I work, we use Ruby on Rails to create both backend and frontend applications. Usually, these applications interact with the same MySQL database. It works great for a majority of our data, but we have one situation which I would like to move to a NoSQL environment. We have clients, and our clients have what we call "inventories"--one or more of them. An inventory can have many thousands of items. This is currently done through two relational database tables, inventories and inventory_items. The problems start when two different inventories have different parameters: # Inventory item from inventory 1, televisions { inventory_id: 1 sku: 12345 name: Samsung LCD 40 inches model: 582903-4 brand: Samsung screen_size: 40 type: LCD price: 999.95 } # Inventory item from inventory 2, accomodation { inventory_id: 2 sku: 48cab23fa name: New York Hilton accomodation_type: hotel star_rating: 5 price_per_night: 395 } Since we obviously can't use brand or star_rating as the column name in inventory_items, our solution so far has been to use generic column names such as text_a, text_b, float_a, int_a, etc, and introduce a third table, inventory_schemas. The tables now look like this: # Inventory schema for inventory 1, televisions { inventory_id: 1 int_a: sku text_a: name text_b: model text_c: brand int_b: screen_size text_d: type float_a: price } # Inventory item from inventory 1, televisions { inventory_id: 1 int_a: 12345 text_a: Samsung LCD 40 inches text_b: 582903-4 text_c: Samsung int_a: 40 text_d: LCD float_a: 999.95 } This has worked well... up to a point. It's clunky, it's unintuitive and it lacks scalability. We have to devote resources to set up inventory schemas. Using separate tables is not an option. Enter NoSQL. With it, we could let each and every item have their own parameters and still store them together. From the research I've done, it certainly seems like a great alterative for this situation. Specifically, I've looked at CouchDB and MongoDB. Both look great. However, there are a few other bits and pieces we need to be able to do with our inventory: We need to be able to select items from only one (or several) inventories. We need to be able to filter items based on its parameters (eg. get all items from inventory 2 where type is 'hotel'). We need to be able to group items based on parameters (eg. get the lowest price from items in inventory 1 where brand is 'Samsung'). We need to (potentially) be able to retrieve thousands of items at a time. We need to be able to access the data from multiple applications; both backend (to process data) and frontend (to display data). Rapid bulk insertion is desired, though not required. Based on the structure, and the requirements, are either CouchDB or MongoDB suitable for us? If so, which one will be the best fit? Thanks for reading, and thanks in advance for answers. EDIT: One of the reasons I like CouchDB is that it would be possible for us in the frontend application to request data via JavaScript directly from the server after page load, and display the results without having to use any backend code whatsoever. This would lead to better page load and less server strain, as the fetching/processing of the data would be done client-side.
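
    For a concrete sense of how the listed requirements map onto MongoDB, in mongo-shell syntax - the collection name is hypothetical, field names follow the example documents, and aggregate() is the modern form (at the time this was written, the same grouping would have used group() or map-reduce):

        // all items from inventory 2 that are hotels
        db.items.find({ inventory_id: 2, accomodation_type: "hotel" })

        // lowest price among Samsung items in inventory 1
        db.items.aggregate([
            { $match: { inventory_id: 1, brand: "Samsung" } },
            { $group: { _id: null, lowest: { $min: "$price" } } }
        ])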

  • How to copy bytes from buffer into the managed struct?

    - by Chupo_cro
    I have a problem with getting the code to work in a managed environment (VS2008 C++/CLI Win Forms App). The problem is I cannot declare the unmanaged struct (is that even possible?) inside the managed code, so I've declared a managed struct but now I have a problem how to copy bytes from buffer into that struct. Here is the pure C++ code that obviously works as expected: typedef struct GPS_point { float point_unknown_1; float latitude; float longitude; float altitude; // x10000 float time; int point_unknown_2; int speed; // x100 int manually_logged_point; // flag (1 --> point logged manually) } track_point; int offset = 0; int filesize = 256; // simulates filesize int point_num = 10; // simulates number of records int main () { char *buffer_dyn = new char[filesize]; // allocate RAM // here, the file would have been read into the buffer buffer_dyn[0xa8] = 0x1e; // simulates the speed data (1e 00 00 00) buffer_dyn[0xa9] = 0x00; buffer_dyn[0xaa] = 0x00; buffer_dyn[0xab] = 0x00; offset = 0x90; // if the data with this offset is transfered trom buffer // to struct, int speed is alligned with the buffer at the // offset of 0xa8 track_point *points = new track_point[point_num]; points[0].speed = 0xff; // (debug) it should change into 0x1e memcpy(&points[0],buffer_dyn+offset,32); cout << "offset: " << offset << "\r\n"; //cout << "speed: " << points[0].speed << "\r\n"; printf ("speed : 0x%x\r\n",points[0].speed); printf("byte at offset 0xa8: 0x%x\r\n",(unsigned char)buffer_dyn[0xa8]); // should be 0x1e delete[] buffer_dyn; // release RAM delete[] points; /* What I need is to rewrite the lines 29 and 31 to work in the managed code (VS2008 Win Forms C++/CLI) What should I have after: array<track_point^>^ points = gcnew array<track_point^>(point_num); so I can copy 32 bytes from buffer_dyn to the managed struct declared as typedef ref struct GPS_point { float point_unknown_1; float latitude; float longitude; float altitude; // x10000 float time; int point_unknown_2; int speed; // x100 int manually_logged_point; // flag (1 --> point logged manually) } track_point; */ return 0; } Here is the paste to codepad.org so it can be seen the code is OK. What I need is to rewrite these two lines: track_point *points = new track_point[point_num]; memcpy(&points[0],buffer_dyn+offset,32); to something that will work in a managed application. I wrote: array<track_point^>^ points = gcnew array<track_point^>(point_num); and now trying to reproduce the described copying of the data from buffer over the struct, but haven't any idea how it should be done. Alternatively, if there is a way to use an unmanaged struct in the same way shown in my code, then I would like to avoid working with managed struct.
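
    A sketch of one way to do the copy in C++/CLI - assuming track_point is redeclared as a value struct with the same 32-byte layout, so that an array of them is one contiguous block that can be pinned (an array<track_point^> of handles cannot be block-copied):

        // hypothetical managed layout + copy; 'value struct' keeps the fields inline
        value struct track_point {
            float point_unknown_1, latitude, longitude, altitude, time;
            int   point_unknown_2, speed, manually_logged_point;
        };

        array<track_point>^ points = gcnew array<track_point>(point_num);
        pin_ptr<track_point> dest = &points[0];                  // pin the managed array
        memcpy(dest, buffer_dyn + offset, sizeof(track_point));  // one 32-byte record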


  • ActionScript 3: Can't see MovieClip

    - by user3697993
    When I play my game, it does not show my _Player MovieClip, but the player does collide with the ground, which is very confusing. So I believe the MovieClip is there, but its texture/sprite is not showing. I think the problem is in the Spawn function (the first function below).

        public class PewdyBird extends MovieClip {
            //Player variables
            public var Up_Speed:int = 25;
            public var speed:Number = 0;
            public var _grav:Number = 0.5;
            public var isJump:Boolean = false;
            public var Score:int = 0;
            public var Player_Live:Boolean = true;
            public var _Player:Player = new Player();

            //Other variables
            //Environment variables
            var Floor:int = 480;
            var Clock:Number = 0;
            var Clock_restart:Number = 0;
            var Clock_ON:Boolean = false;
            var Clock_max:int = 15;
            var Player_Stage:Boolean = true;
            private var _X:int;
            private var _Y:int;
            private var hit_ground:Boolean = false;
            private var width_BG:int = 479;

            //SPAWN
            function Spawn(e:Event) {
                _Player.x = 200;
                _Player.y = 200;
                stage.addChild(_Player);
            }

            //Keyboard Input
            private function KeyboardListener(e:KeyboardEvent) {
                if (e.keyCode == Keyboard.SPACE) {
                    Clock = Clock_restart;
                    Clock_ON = true;
                    isJump = true;
                    if (isJump) {
                        _Player.gotoAndPlay("Fly");
                        speed = -Up_Speed;
                        isJump = false;
                    }
                }
            }

            //Mouse Input & Spawn Listener
            private function MouseListener(m:MouseEvent) {
                if (MouseEvent.CLICK) {
                    Clock = Clock_restart;
                    Clock_ON = true;
                    isJump = true;
                    if (isJump) {
                        _Player.gotoAndPlay("Fly");
                        speed = -Up_Speed;
                        isJump = false;
                    }
                }
            }

            //Rotation Fly
            function Rot_Fly() {
                if (Clock < Clock_max) {
                    _Player.rotation = -15;
                } else if (Clock >= Clock_max) {
                    if (_Player.rotation < 90) {
                        _Player.rotation += 10;
                    } else if (_Player.rotation >= 90) {
                        _Player.rotation = 90;
                    }
                }
            }
            //END

            //Update Function
            function enter_frame(e:Event):void {
                Rot_Fly();
                //Clock
                if (Clock_ON) {
                    Clock++;
                } else if (Clock > Clock_max) {
                    Clock = Clock_max;
                }
                //Fall Limits
                if (speed >= 20) {
                    _Player.y += 20;
                    return;
                    _Player.gotoAndPlay("Fall"); // note: unreachable after the return
                }
                //Physics
                speed += _grav * 3;
                _Player.y += speed;
            }

            //Hit Ground
            function Hit_Ground(e:Event) {
                if (_Player.hitTestObject(Ground1)) {
                    _grav = 0;
                    speed = 0;
                    trace("HIT GROUND");
                } else if (_Player.hitTestObject(Ground2)) {
                    _grav = 0;
                    speed = 0;
                    trace("HIT GROUND");
                } else if (_Player.hitTestObject(Ground1) == false) {
                    _grav = 1;
                } else if (_Player.hitTestObject(Ground2) == false) {
                    _grav = 1;
                }
            }

            //Background Slide (Left)
            private function Background_Move(e:Event):void {
                Background1.x -= 1.5;
                Background2.x -= 1.5;
                Ground1.x -= 4;
                Ground2.x -= 4;
                if (Background1.x < -width_BG) {
                    Background1.x = width_BG;
                } else if (Background2.x < -width_BG) {
                    Background2.x = width_BG;
                } else if (Ground1.x < -width_BG) {
                    Ground1.x = width_BG;
                } else if (Ground2.x < -width_BG) {
                    Ground2.x = width_BG;
                }
            }
        }

    The event listeners are registered on the timeline in Flash itself:

        stage.addEventListener(Event.ENTER_FRAME, enter_frame);
        stage.addEventListener(Event.ENTER_FRAME, Hit_Ground);
        stage.addEventListener(KeyboardEvent.KEY_UP, KeyboardListener);
        stage.addEventListener(MouseEvent.CLICK, MouseListener);
        stage.addEventListener(Event.ENTER_FRAME, Background_Move);
        stage.addEventListener(Event.ADDED_TO_STAGE, Spawn);


  • About getaddrinfo() in C++?

    - by Isavel
    I'm reading Beej's Guide to Network Programming, and there's a part of the book that provides sample code illustrating the use of getaddrinfo(). The book states that the code below "will print the IP addresses for whatever host you specify on the command line". Now I'm curious and want to try it out, but I guess the code was developed in a UNIX environment, and I'm using Visual Studio 2012 on Windows 7, so most of the headers were not supported. I did a bit of research and found out that on Windows I need to include winsock.h and link ws2_32.lib to get it working. Fortunately everything compiled with no errors, but when I ran it under the debugger with 'www.google.com' as the command argument, I was disappointed that it did not print any IP address; the output I got on the console was "getaddrinfo: E". What does the letter E mean? Do I need to configure something outside the debugger? Interestingly, when I left the command argument blank, the output changed to "usage: showip hostname". Any help would be appreciated.

        #ifdef _WIN32
        #endif

        #include <sys/types.h>
        #include <winsock2.h>
        #include <ws2tcpip.h>
        #include <iostream>
        using namespace std;
        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <winsock.h>

        #pragma comment(lib, "ws2_32.lib")

        int main(int argc, char *argv[])
        {
            struct addrinfo hints, *res, *p;
            int status;
            char ipstr[INET6_ADDRSTRLEN];

            if (argc != 2) {
                fprintf(stderr, "usage: showip hostname\n");
                system("PAUSE");
                return 1;
            }

            memset(&hints, 0, sizeof hints);
            hints.ai_family = AF_UNSPEC;     // AF_INET or AF_INET6 to force version
            hints.ai_socktype = SOCK_STREAM;

            if ((status = getaddrinfo(argv[1], NULL, &hints, &res)) != 0) {
                fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status));
                system("PAUSE");
                return 2;
            }

            printf("IP addresses for %s:\n\n", argv[1]);

            for (p = res; p != NULL; p = p->ai_next) {
                void *addr;
                char *ipver;

                // get the pointer to the address itself,
                // different fields in IPv4 and IPv6:
                if (p->ai_family == AF_INET) { // IPv4
                    struct sockaddr_in *ipv4 = (struct sockaddr_in *)p->ai_addr;
                    addr = &(ipv4->sin_addr);
                    ipver = "IPv4";
                } else { // IPv6
                    struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)p->ai_addr;
                    addr = &(ipv6->sin6_addr);
                    ipver = "IPv6";
                }

                // convert the IP to a string and print it:
                inet_ntop(p->ai_family, addr, ipstr, sizeof ipstr);
                printf("  %s: %s\n", ipver, ipstr);
            }

            freeaddrinfo(res); // free the linked list
            system("PAUSE");
            return 0;
        }
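    Update: after more digging, two likely causes come to mind, though I have not confirmed them. First, Winsock is never initialized, and without WSAStartup() any socket call, including getaddrinfo(), fails (with WSANOTINITIALISED). Second, when UNICODE is defined, gai_strerror() expands to the wide-character gai_strerrorW(), so printing its result with a narrow %s stops at the first embedded zero byte -- which would show exactly one letter, such as the 'E' of "Either the application has not called WSAStartup...". A minimal sketch of what I believe the fix looks like:

        #include <winsock2.h>
        #include <ws2tcpip.h>
        #include <stdio.h>
        #pragma comment(lib, "ws2_32.lib")

        int main(void)
        {
            // Winsock must be initialized before any socket call,
            // including getaddrinfo().
            WSADATA wsaData;
            if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0) {
                fprintf(stderr, "WSAStartup failed\n");
                return 1;
            }

            struct addrinfo hints = {0}, *res = NULL;
            hints.ai_family = AF_UNSPEC;
            hints.ai_socktype = SOCK_STREAM;

            int status = getaddrinfo("www.google.com", NULL, &hints, &res);
            if (status != 0) {
                // %S prints the wide string that gai_strerror() returns when
                // UNICODE is defined (gai_strerrorA is the narrow alternative).
                fprintf(stderr, "getaddrinfo: %S\n", gai_strerror(status));
                WSACleanup();
                return 2;
            }

            printf("lookup succeeded\n");
            freeaddrinfo(res);
            WSACleanup();
            return 0;
        }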


  • Not getting the output in Java Database Connectivity

    - by Dooree
    I'm working on Java Database Connectivity through the Eclipse IDE. I built a database through the Ubuntu terminal, and I need to connect to it and work with it. However, when I try to run the following code, I don't get any error, but the output shown below the code is what appears -- does anybody know why I don't get the output from the code?

        //STEP 1. Import required packages
        import java.sql.*;

        public class FirstExample {
            // JDBC driver name and database URL
            static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
            static final String DB_URL = "jdbc:mysql://localhost/EMP";

            // Database credentials
            static final String USER = "username";
            static final String PASS = "password";

            public static void main(String[] args) {
                Connection conn = null;
                Statement stmt = null;
                try {
                    //STEP 2: Register JDBC driver
                    Class.forName("com.mysql.jdbc.Driver");

                    //STEP 3: Open a connection
                    System.out.println("Connecting to database...");
                    conn = DriverManager.getConnection(DB_URL, USER, PASS);

                    //STEP 4: Execute a query
                    System.out.println("Creating statement...");
                    stmt = conn.createStatement();
                    String sql;
                    sql = "SELECT id, first, last, age FROM Employees";
                    ResultSet rs = stmt.executeQuery(sql);

                    //STEP 5: Extract data from result set
                    while (rs.next()) {
                        //Retrieve by column name
                        int id = rs.getInt("id");
                        int age = rs.getInt("age");
                        String first = rs.getString("first");
                        String last = rs.getString("last");

                        //Display values
                        System.out.print("ID: " + id);
                        System.out.print(", Age: " + age);
                        System.out.print(", First: " + first);
                        System.out.println(", Last: " + last);
                    }

                    //STEP 6: Clean-up environment
                    rs.close();
                    stmt.close();
                    conn.close();
                } catch (SQLException se) {
                    //Handle errors for JDBC
                    se.printStackTrace();
                } catch (Exception e) {
                    //Handle errors for Class.forName
                    e.printStackTrace();
                } finally {
                    //finally block used to close resources
                    try {
                        if (stmt != null) stmt.close();
                    } catch (SQLException se2) {
                    } // nothing we can do
                    try {
                        if (conn != null) conn.close();
                    } catch (SQLException se) {
                        se.printStackTrace();
                    } //end finally try
                } //end try
                System.out.println("Goodbye!");
            } //end main
        } //end FirstExample

    Instead of the SELECT results, this is what appears:

        <ConnectionProperties>
          <PropertyCategory name="Connection/Authentication">
            <Property name="user" required="No" default="" sortOrder="-2147483647" since="all">
              The user to connect as
            </Property>
            <Property name="password" required="No" default="" sortOrder="-2147483646" since="all">
              The password to use when connecting
            </Property>
            <Property name="socketFactory" required="No" default="com.mysql.jdbc.StandardSocketFactory" sortOrder="4" since="3.0.3">
              The name of the class that the driver should use for creating socket
              connections to the server. This class must implement the interface
              'com.mysql.jdbc.SocketFactory' and have public no-args constructor.
            </Property>
            <Property name="connectTimeout" required="No" default="0" sortOrder="9" since="3.0.1">
              Timeout for socket connect (in milliseconds), with 0 being no timeout.
              Only works on JDK-1.4 or newer. Defaults to '0'.
            </Property>
        ...


  • Can't save my picture

    - by mamii
    I want to save the image that I draw, but a failure is always reported. I have tested and tried, but I cannot find the error, so I'm turning to you. This problem has been like a canker sore for me -- and what is a drawing application without the ability to save? :D

    Question: what is wrong with my code for saving, or with anything else?

    The log:

        09-12 07:30:34.346: E/Panel(8003): IOEception
        09-12 07:30:34.346: E/Panel(8003): java.io.IOException: Parent directory of file does not exist: /sdcard/anppp/2012Sep1273034.png
        09-12 07:30:34.346: E/Panel(8003):   at java.io.File.createNewFile(File.java:1263)
        09-12 07:30:34.346: E/Panel(8003):   at aa.bb.cc.Panel.saveapp(Panel.java:67)
        09-12 07:30:34.346: E/Panel(8003):   at aa.bb.cc.AndroidPaint.onOptionsItemSelected(AndroidPaint.java:94)
        09-12 07:30:34.346: E/Panel(8003):   at android.app.Activity.onMenuItemSelected(Activity.java:2170)
        09-12 07:30:34.346: E/Panel(8003):   at com.android.internal.policy.impl.PhoneWindow.onMenuItemSelected(PhoneWindow.java:730)
        09-12 07:30:34.346: E/Panel(8003):   at com.android.internal.view.menu.MenuItemImpl.invoke(MenuItemImpl.java:139)
        09-12 07:30:34.346: E/Panel(8003):   at com.android.internal.view.menu.MenuBuilder.performItemAction(MenuBuilder.java:855)
        09-12 07:30:34.346: E/Panel(8003):   at com.android.internal.view.menu.ExpandedMenuView.invokeItem(ExpandedMenuView.java:89)
        09-12 07:30:34.346: E/Panel(8003):   at com.android.internal.view.menu.ExpandedMenuView.onItemClick(ExpandedMenuView.java:93)
        09-12 07:30:34.346: E/Panel(8003):   at android.widget.AdapterView.performItemClick(AdapterView.java:284)
        09-12 07:30:34.346: E/Panel(8003):   at android.widget.ListView.performItemClick(ListView.java:3285)
        09-12 07:30:34.346: E/Panel(8003):   at android.widget.AbsListView$PerformClick.run(AbsListView.java:1640)
        09-12 07:30:34.346: E/Panel(8003):   at android.os.Handler.handleCallback(Handler.java:587)
        09-12 07:30:34.346: E/Panel(8003):   at android.os.Handler.dispatchMessage(Handler.java:92)
        09-12 07:30:34.346: E/Panel(8003):   at android.os.Looper.loop(Looper.java:123)
        09-12 07:30:34.346: E/Panel(8003):   at android.app.ActivityThread.main(ActivityThread.java:4363)
        09-12 07:30:34.346: E/Panel(8003):   at java.lang.reflect.Method.invokeNative(Native Method)
        09-12 07:30:34.346: E/Panel(8003):   at java.lang.reflect.Method.invoke(Method.java:521)
        09-12 07:30:34.346: E/Panel(8003):   at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        09-12 07:30:34.346: E/Panel(8003):   at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        09-12 07:30:34.346: E/Panel(8003):   at dalvik.system.NativeStart.main(Native Method)

    Here is the code:

        private Bitmap mBitmap;
        private Canvas mCanvas;
        private Bitmap tmpBitmap;
        private Canvas tmpCanvas;
        private DrawHandler mDrawHandler;
        private Canvas tCanvas;
        private String mImagePath = Environment.getExternalStorageDirectory() + "/anppp";
        private File file;

        public void saveapp() {
            Calendar currentDate = Calendar.getInstance();
            SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMMddHmmss");
            String dateNow = formatter.format(currentDate.getTime());
            file = new File(mImagePath + "/" + dateNow + ".png");
            FileOutputStream fos;
            try {
                file.createNewFile();
                fos = new FileOutputStream(file);
                tmpBitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
                fos.close();
            } catch (FileNotFoundException e) {
                Log.e("Panel", "FileNotFoundException", e);
            } catch (IOException e) {
                Log.e("Panel", "IOEception", e);
            }
        }

    That's it... I do not know what could be wrong ;(


  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day. With the launch of Visual Studio 2010 already done in Asia and rolling around the world towards us, we are getting ready for the RTM (release to manufacturing). We have had TFS 2010 in production for nearly 6 months and have had only minimal problems.

    Update 12th April 2010 – Added Scott Hanselman’s tweet about the MSDN download release time.

    SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production -- not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live licence, I think there will be a lot of competition.

        Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman

    Same as before, we need to uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS 2010, you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.

    We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of taking a snapshot as well as a DB backup. The plan:

    - Done - Snapshot the Hyper-V server. Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade explaining why you should never snapshot a running server.
    - Done - Uninstall Visual Studio Team Explorer 2010 RC. You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server.
    - Done - Uninstall TFS 2010 RC
    - Done - Install TFS 2010 RTM
    - Done - Configure TFS 2010 RTM. Pick the Upgrade option and point it at your existing "tfs_Configuration" database to load all of the existing settings.
    - Done - Upgrade the SharePoint extensions
    - Pending - Upgrade the build servers
    - Test the server

    The back-out plan -- and you should always have one -- is to restore the snapshot.

    Upgrading to Team Foundation Server 2010 – Done

    The first thing you need to do is turn off the TFS server, then log into the Hyper-V server and create a snapshot.

    Figure: Make sure you turn the server off and delete all old snapshots before you take a new one.

    I noticed that the snapshot taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server.

    Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM'ness that is coming next. The upgrade procedure for TFS 2010 is to uninstall the old version and install the new one.

    Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.

    Figure: Most of the heavy lifting is done by the uninstaller, but make sure you have removed any of the client bits first -- specifically Visual Studio 2010 or Team Explorer 2010.

    Once the uninstall is complete (it took around 5 minutes for me), you can begin the install of the RTM. Running the 64-bit OS allows the application to use more than 2 GB of RAM, which, while not common, may be of use in heavy-load situations.

    Figure: It is always recommended to install the 64-bit version of a server application where possible.

    With SharePoint 2010, Exchange 2010, and even Windows Server 2008 R2 being 64-bit only, I do not think there will be another release of a server app that is 32-bit.

    You then need to choose what it is you want to install; this depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build service (controller only) on our TFS server, along with Analysis Services and Reporting Services, but our SharePoint server lives elsewhere.

    Figure: This always confuses people, but in reality it makes sense. Don't install what you do not need -- every extra you install has an impact on performance.

    If you are integrating with SharePoint, you will need to run this install on every front-end server in your farm, and don't forget to upgrade your build servers and proxy servers later.

    Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS).

    It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server between your TFS server and your build servers.

    Figure: Installing Microsoft .NET Framework 4 takes the most time.

    Figure: Now run Windows Update and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, add-ons, check-in policies and other bits as well.

    Configure Team Foundation Server 2010 – Done

    Now you can configure the server. If you have no key you will need to pick "Install a Trial Licence", but a key is only £500, or free with an MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key.

    Figure: You can pick Trial and add your key later using the TFS Server Admin.

    Here is where the real choices happen. We are doing an upgrade from a previous version, so I will pick Upgrade -- the same as all you folks that are using the RC or TFS 2008.

    Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgrades them to the release.

    Once you have entered your database server name, you can click "List available databases" and it will show what it can upgrade.

    Figure: Select your database from the list and, at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases.

    At this point the configuration wizard will load the configuration from your existing database if you have one. If you are upgrading TFS 2008, refer to Rules To Better TFS 2010 Migration. Mostly the default values will suffice, but depending on the configuration you want, you can pick different options.

    Figure: Set the application tier account and authentication method to use. We use NTLM to keep things simple, as we host our TFS server externally for our remote developers.

    Figure: Setting your TFS server URLs to be the remote URLs allows the reports to be accessed without using VPN. Very handy for those remote developers.

    Figure: Detected the existing warehouse, no problem.

    Figure: Again, we love green ticks. They give us a warm fuzzy feeling.

    Figure: The username for connecting to Reporting Services should be a domain account (if you are on a domain, that is).

    Figure: Set up the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.

    You then need to run all of your readiness checks. These checks can save your life! They verify all of the settings that you have entered, as well as checking that all the external services are configured and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous versions were not. First, Microsoft changed the install into two steps: install and configuration. Second, they have pulled out all the stops in making the configuration run all the checks necessary to make sure that once you start the install, it will complete. If you find any errors, I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.

    Figure: Now we have everything set up, the configuration wizard can do its work.

    Figure: Took a while on the "Web site" stage for some reason, but zipped through after that.

    Figure: Last wee bit -- TFS needs to do a little tinkering with the data to complete the upgrade.

    Figure: All upgraded. I am not worried about the yellow triangle, as SharePoint was being a little silly.

        Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException)
        Exception Stack Trace:
           at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)
           at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)

        [Info @16:10:16.307] Benign exception caught as part of verify:
        Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException)
        Exception Stack Trace:
           at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)
           at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify()

        Inner Exception Details:
        Exception Message: TF249064: The following Web service returned a response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException)
        Exception Data Dictionary: ResponseStatusCode = InternalServerError

    I'll look at SharePoint afterwards; probably the SharePoint box just needs a restart or a kick. If there is a problem with SharePoint it will come out in testing, but I will definitely be passing this on to Microsoft.

    Upgrading the SharePoint connector to TFS 2010

    You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front-end servers. To do this, uninstall TFS 2010 RC from each of them in the same way as on the server, and then install just the RTM extensions.

    Figure: Only install the SharePoint Extensions on your SharePoint front-end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.

    Figure: When you configure SharePoint, it uploads all of the solutions and templates.

    Figure: Everything is uploaded successfully.

    Figure: TFS even remembered the settings from the previous installation -- fantastic.

    Upgrading the Team Foundation Build servers to TFS 2010

    Just like on the SharePoint servers, you will need to upgrade the build server to the RTM: uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed; you will need to remove this as well. (Coming soon)

    Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010

    If you have developers still on Visual Studio 2005 or 2008, you will need to download the respective compatibility pack:

    - Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    - Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    If you are using Eclipse, you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest versions of their applications with SSW Diagnostic, which will check for service packs and hotfixes to Visual Studio as well.

    Technorati Tags: TFS, TFS2010, TFS 2010, Upgrade


  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post about the advantages and disadvantages of shrinking, and why one should not shrink a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I feel that his comment is worth an article in itself, so for everybody to read his wonderful explanation, I am posting it here. Thanks Imran!

    Shrinking a database always degrades performance and increases fragmentation in the database. Keep that in mind before you start reading the following comment: if you are going to say that shrinking a database is bad and evil, here I am saying it first and loud. Imran's comment describes only the process of how the shrink operation works; he explains his understanding and asks for further clarification. I have removed the Best Practices section from his comments, as it needed a few corrections.

    Comments from Imran -

    Before I explain to you the concept of shrinking a database, let us understand the concept of database files. When we create a new database inside SQL Server, SQL Server typically creates two physical files in the operating system: one with the .MDF extension, and another with the .LDF extension. The .MDF file is called the Primary Data File; the .LDF file is called the Transaction Log File. If you add one or more data files to a database, the physical files created in the operating system will have the extension .NDF and are called Secondary Data Files; whereas, when you add one or more log files to a database, the physical files created will have the same .LDF extension.

    The questions now are: "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and "Why is the .MDF file called a primary data file?"

    Answers (note: the following explanation is based on my limited knowledge of SQL Server, so experts, please do comment):

    A data file with an .MDF extension is called a Primary Data File because it contains the database catalogs. Catalogs mean metadata, and metadata is "data about data". Examples of metadata include the system objects that store information about other objects (but not the data stored by users):

    - sysobjects stores information about all objects in that database.
    - sysindexes stores information about all indexes and rows of every table in that database.
    - syscolumns stores information about all columns that each table has in that database.
    - sysusers stores the users that the database has.

    Although metadata stores information about other objects, it is not the transactional data that a user enters; rather, it is system data about the data. Because the Primary Data File (.MDF) contains this important information about the database, it is treated as a special file, and it is given the name Primary Data File because it contains the database catalogs. This file is present in the primary file group. You can always create additional objects (tables, indexes, etc.) in the primary data file, by specifying that you want to create the object under the primary file group.

    Any additional data file that you add to the database will contain only transactional data, but no metadata, which is why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is the primary data file or a secondary data file.

    There are many advantages to storing data in different files under different file groups. You can put your read-only tables in one file (file group) and your read-write tables in another file (file group), and back up only the file group that has the read-write data, so that you avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance.

    A real-world scenario where we use files could be this one: let's say you have created a database called MYDB on the D: drive, which has 50 GB of space. You have one data file (.MDF) and one log file on the D: drive, and suppose that all of that 50 GB has been used up and you have no free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database, create this new data file on the new hard disk, move some of the objects from one file to another, and make the file group under which you added the new file the default file group, so that any new object that is created goes into the new files unless specified otherwise.

    Now that we have a basic idea of what data files are, what type of data they store, and why they are named the way they are, let's move on to the next topic: shrinking.

    First of all, I disagree with the Microsoft terminology of naming this feature "shrinking". Shrinking, in regular terms, means reducing the size of a file by compressing it. BUT in SQL Server, shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from the database files and releasing that empty space either to the operating system or to SQL Server.

    Let's examine this through an example. Say you have a database "MYDB" with a size of 50 GB that has about 20 GB of free space, which means 30 GB of the database is filled with data and 20 GB is free -- not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the operating system -- and MIND YOU -- you can only shrink the database down to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to turn the autogrow option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons:

    - There is no empty space to be released. The Shrink command does not compress the database; it only removes empty space from the database files, and here there is no empty space.
    - The Shrink command is a logged operation: when we perform a shrink, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file, and you cannot shrink the database.

    Now answering your questions:

    (1) Q: What are the USEDPAGES and ESTIMATEDPAGES that appear in the Results pane after using DBCC SHRINKDATABASE (NorthWind, 10)?

    A: According to Books Online (for SQL Server 2000):

    - UsedPages: the number of 8-KB pages currently used by the file.
    - EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to.

    Important note: before asking any question, make sure you go through Books Online or search on Google first. There are good reasons for doing so:

    1. If someone else has already had this question, the chances that it is already answered are more than 50%.
    2. It reduces your waiting time for the answer.

    (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment?

    A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of using the command from Query Analyzer is that your console won't freeze -- you can carry on with your regular activities in Enterprise Manager.

    (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end users, DBAs, or the server/system itself?

    A: An .NDF file is a secondary data file. You have never heard of it because, when a database is created, SQL Server by default creates the database with only one data file (.MDF) and one log file (.LDF) -- or however your model database has been set up, because the model database is a template used every time you create a new database with the CREATE DATABASE command. Unless you have added an extra data file, you will not see one. This file is used by SQL Server to store the data saved by users.

    Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft folks meant.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
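    For reference, a minimal T-SQL sketch of the commands discussed in this post (MYDB and MYDB_Data are placeholder database and logical file names):

        -- Shrink the whole database, leaving 10 percent free space in each file
        DBCC SHRINKDATABASE (NorthWind, 10);

        -- Shrink a single data file to a target size given in MB; the logical
        -- file names can be listed with: SELECT name, size FROM sys.database_files
        USE MYDB;
        DBCC SHRINKFILE (MYDB_Data, 30720);   -- 30720 MB = 30 GB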


  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root and recursively "walking" the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often very good candidates for parallelization.  This is apparent from a common-sense standpoint: since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

        Parallel.Invoke(
            () =>
            {
                Console.WriteLine("Action 1 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () =>
            {
                Console.WriteLine("Action 2 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () =>
            {
                Console.WriteLine("Action 3 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            }
        );

    Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.  For example, let's look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster).  On my system, for example, I can use the framework's sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...

            SwapElements(array, left, last);

            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1),
                () => QuickSortInternal(array, last + 1, right)
            );
        }

    This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here -- by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We're using more resources, as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we're "over-parallelizing" this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation:

    When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right - left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke (see the sketch at the end of this post).

    The second approach is to track how "deep" in the "tree" we currently are, and if we are below some number of levels, stop parallelizing.  This is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but it may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

        // Code above is unchanged...

            SwapElements(array, left, last);

            if (maxDepth < 1)
            {
                QuickSortInternal(array, left, last - 1, maxDepth);
                QuickSortInternal(array, last + 1, right, maxDepth);
            }
            else
            {
                --maxDepth;
                Parallel.Invoke(
                    () => QuickSortInternal(array, left, last - 1, maxDepth),
                    () => QuickSortInternal(array, last + 1, right, maxDepth));
            }

    We no longer allow this to parallelize indefinitely -- only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total number of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better.  On average, I get the following timings:

    - Framework via Array.Sort: 7.3 seconds
    - Serial quicksort implementation: 9.3 seconds
    - Naive parallel implementation: 14 seconds
    - Parallel implementation restricting depth: 4.7 seconds

    Finally, we are now faster than the framework's Array.Sort implementation.
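    For completeness, the first (size-threshold) approach mentioned above could look like the following sketch, using the two-parameter QuickSortInternal from the serial version; the cutoff value here is hypothetical and would need tuning for your data and hardware:

        // Code above is unchanged...

            SwapElements(array, left, last);

            const int SequentialThreshold = 4096; // hypothetical cutoff -- tune it

            if (right - left < SequentialThreshold)
            {
                // The partition is small: task overhead would outweigh the
                // gain, so recurse serially.
                QuickSortInternal(array, left, last - 1);
                QuickSortInternal(array, last + 1, right);
            }
            else
            {
                Parallel.Invoke(
                    () => QuickSortInternal(array, left, last - 1),
                    () => QuickSortInternal(array, last + 1, right));
            }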

